In addition, for multiplication the following rule is obeyed:
This last property is optional. For multivariate polynomials (described elsewhere) both properties hold, but for matrices only the addition property holds, so the function MultiplyAddSparseTrees (described below) should not be used for matrices.
The storage is recursive. The sparse tree begins with a list of objects {n1,tree1}, one for each value n1 of the first item in the key. The tree1 part then contains a sub-tree for all the items in the database for which the value of the first item in the key is n1.
The above single element could be created with
In> r:=CreateSparseTree({1,2},3)
Out> {{1,{{2,3}}}};
In> SparseTreeGet({1,2},r)
Out> 3;
In> SparseTreeGet({1,3},r)
Out> 0;
In> SparseTreeSet({1,2},r,Current+5)
Out> 8;
In> r
Out> {{1,{{2,8}}}};
In> SparseTreeSet({1,3},r,Current+5)
Out> 5;
In> r
Out> {{1,{{3,5},{2,8}}}};
The sparse tree can be traversed, one element at a time, with SparseTreeScan:
In> SparseTreeScan(Hold({{k,v},Echo({k,v})}),2,r)
{1,3} 5
{1,2} 8
An example of the use of this function could be multiplying a sparse matrix with a sparse vector, where the entire matrix can be scanned with SparseTreeScan, and each non-zero matrix element A[i][j] can then be multiplied with a vector element v[j], and the result added to a sparse vector w[i], using the SparseTreeGet and SparseTreeSet functions. Multiplying two sparse matrices would require two nested calls to SparseTreeScan to multiply every item from one matrix with an element from the other, and add it to the appropriate element in the resulting sparse matrix.
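As an illustration of this scheme (not of the actual Yacas internals), the following Python sketch uses plain dictionaries as a stand-in for the (key,value) database: the matrix maps (i,j) keys to non-zero entries and the vector maps j to non-zero entries, and the multiplication scans all stored matrix elements, mirroring the SparseTreeScan/SparseTreeGet/SparseTreeSet pattern described above.

def sparse_matvec(A, v):
    # A maps (i, j) keys to non-zero matrix elements, v maps j to non-zero vector elements;
    # the result w maps i to non-zero elements of the product vector
    w = {}
    for (i, j), aij in A.items():      # scan every stored (non-zero) matrix element
        if j in v:                     # 'get' the vector element, skipping implicit zeros
            w[i] = w.get(i, 0) + aij * v[j]   # 'set' the accumulated result
    return w

# example: a 2x2 matrix with two non-zero entries times a sparse vector
A = {(0, 0): 2, (1, 1): 3}
v = {1: 5}
print(sparse_matvec(A, v))             # prints {1: 15}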
When the matrix elements A[i][j] are defined by a function f(i,j) (which can be considered a dense representation), and the matrix needs to be multiplied with a sparse vector v[j], it is better to iterate over the sparse vector v[j]. The representation thus determines the most efficient algorithm to use.
The API to sparse trees is:
Concepts and ideas are taken from the books [Davenport et al. 1989] and [von zur Gathen et al. 1999].
A term is an object which can be written as
A multivariate polynomial is taken to be a sum over terms.
We write c[a]*x^a for a term, where a is a list of powers for the monomial, and c[a] the coefficient of the term.
It is useful to define an ordering of monomials, to be able to determine a canonical form of a multivariate polynomial.
For the currently implemented code the lexicographic order has been chosen:
This method is called lexicographic because it is similar to the way words are ordered in a usual dictionary.
For all algorithms (including division) there is some freedom in the ordering of monomials. One interesting advantage of the lexicographic order is that it can be implemented with a recursive data structure, where the first variable, x[1] can be treated as the main variable, thus presenting it as a univariate polynomial in x[1] with all its terms grouped together.
Other orderings can be used, by re-implementing a part of the code dealing with multivariate polynomials, and then selecting the new code to be used as a driver, as will be described later on.
Given the above ordering, the following definitions can be stated:
For a non-zero multivariate polynomial
with a monomial order:
The above define access to the leading monomial, which is used for divisions, gcd calculations and the like. Thus an implementation needs to be able to determine {mdeg(f),lc(f)}. Note the similarity with the (key,value) pairs described in the sparse tree section: mdeg(f) can be thought of as a 'key', and lc(f) as a 'value'.
The multicontent, multicont(f), is defined to be a term that divides all the terms in f, and is the term described by ( Min(a), Gcd(c)), with Gcd(c) the GCD of all the coefficients, and Min(a) the lowest exponents for each variable, occurring in f for which c is non-zero.
The multiprimitive part is then defined as pp(f):=f/ multicont(f).
For a multivariate polynomial, the obvious addition and (distributive) multiplication rules hold:
(a+b) + (c+d) := a+b+c+d
a*(b+c) := (a*b)+(a*c)
These are supported in the Yacas system through a multiply-add operation:
An alternative is to store only the terms with non-zero coefficients. This adds a little overhead for polynomials that could efficiently be stored in a dense representation, but the memory use stays small, and large sparse polynomials are also stored in an acceptable amount of memory. It is important to still be able to add, multiply, divide and get the leading term of a multivariate polynomial when it is stored in a sparse representation.
For the representation, the data structure containing the (exponents,coefficient) pair can be viewed as a database holding (key,value) pairs, where the list of exponents is the key, and the coefficient of the term is the value stored for that key. Thus, for a variable set {x,y} the list {{1,2},3} represents 3*x*y^2.
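A minimal Python sketch of this view (using a dictionary as a stand-in for the term database, not the actual Yacas storage) shows how the addition and multiplication properties of the keys are used:

def add_term(terms, key, coeff):
    # addition: terms with equal keys combine by adding their coefficients
    terms[key] = terms.get(key, 0) + coeff
    if terms[key] == 0:
        del terms[key]                 # keep the representation sparse

def multiply(p, q):
    # multiplication: keys (exponent lists) add, coefficients multiply
    result = {}
    for ka, ca in p.items():
        for kb, cb in q.items():
            add_term(result, tuple(a + b for a, b in zip(ka, kb)), ca * cb)
    return result

# 3*x*y^2 times (x + y) for the variable set {x, y}
print(multiply({(1, 2): 3}, {(1, 0): 1, (0, 1): 1}))   # {(2, 2): 3, (1, 3): 3}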
Yacas stores multivariates internally as MultiNomial (vars, terms), where vars is the ordered list of variables, and terms some object storing all the (key, value) pairs representing the terms. Note we keep the storage vague: the terms placeholder is implemented by other code, as a database of terms. The specific representation can be configured at startup (this is described in more detail below).
For the current version, Yacas uses the 'sparse tree' representation, which is a recursive sparse representation. For example, for a variable set {x,y,z}, the 'terms' object contains a list of objects of form {deg,terms}, one for each degree deg for the variable 'x' occurring in the polynomial. The 'terms' part of this object is then a sub-sparse tree for the variables {y,z}.
An explicit example:
In> MM(3*x^2+y)
Out> MultiNomial({x,y},{{2,{{0,3}}},{0,{{1,1},{0,0}}}});
This representation is sparse:
In> r:=MM(x^1000+x)
Out> MultiNomial({x},{{1000,1},{1,1}});
In> r*r
Out> MultiNomial({x},{{2000,1},{1001,2},{2,1},{0,0}});
In> NormalForm(%)
Out> x^2000+2*x^1001+x^2;
At the top level are the routines callable by the user or the rest of the system: MultiDegree, MultiDivide, MultiGcd, Groebner, etc. In general, this is the level implementing the operations actually desired.
The middle level does the book-keeping of the MultiNomial(vars,terms) expressions, using the functionality offered by the lowest level.
For the current system, the middle level is in multivar.rep/ sparsenomial.ys, and it uses the sparse tree representation implemented in sparsetree.ys.
The middle level is called the 'driver', and can be changed or re-implemented if necessary. For instance, if calculations need to be done for which dense representations are actually acceptable, one could write a C++ plugin implementing the above-mentioned database structure, and then write a middle-level driver using that code. The driver can then be selected at startup. The default driver is chosen in the file 'yacasinit.ys', but this can be overridden in the .yacasrc file, in some other file that is loaded, or at the command line, as long as it is done before the multivariates module is loaded (which loads the selected driver). Driver selection is as simple as setting a global variable to the name of the file implementing the driver. In yacasinit.ys there is a line similar to:
Set(MultiNomialDriver, "multivar.rep/sparsenomial.ys");
The choice was made for static configuration of the driver before the system starts up because it is expected that there will in general be one best way of doing it, given a certain system with a certain set of libraries installed on the operating system, and for a specific version of Yacas. The best version can then be selected at start up, as a configuration step. The advantage of static selection is that no overhead is imposed: there is no performance penalty for the abstraction layers between the three levels.
Integrate can have its own set of rules for specific integrals, which might return a correct answer immediately. Alternatively, it calls the function AntiDeriv, to see if the anti-derivative can be determined for the integral requested. If this is the case, the anti-derivative is used to compose the output.
If the integration algorithm cannot perform the integral, the expression is returned unsimplified.
For the purpose of setting up the integration table, a few declaration functions have been defined, which use some generalized pattern matchers to be more flexible in recognizing expressions that are integrable.
The calling sequence for IntFunc is
IntFunc(variable,pattern,antiderivative)
For instance, for the function Cos(x) there is a declaration:
IntFunc(x,Cos(_x),Sin(x));
The fact that the second argument is a pattern means that each occurrence of the variable to be matched should be referred to as _x, as in the example above.
IntFunc generalizes the integration implicitly, in that it will set up the system to actually recognize expressions of the form Cos(a*x+b), and return Sin(a*x+b)/a automatically. This means that the variables a and b are reserved, and cannot be used in the pattern. Also, the variable used (in this case _x) is actually matched to the expression passed in to the function, and the variable var is the real variable being integrated over. To clarify: suppose the user wants to integrate Cos(c*y+d) over y; then the following variables are set:
When functions are multiplied by constants, that situation is handled by the integration rule that can deal with univariate polynomials multiplied by functions, as a constant is a polynomial of degree zero.
The general form for declaring anti-derivatives for such expressions is:
IntPureSquare(variable, pattern, sign2, sign0, antiderivative)
The expression is searched for the pattern, where the variable can match to a sub-expression of the form a*x^2+b, and for which both a and b are numbers and a*sign2>0 and b*sign0>0.
As an example:
IntPureSquare(x, num_IsFreeOf(var)/(_x), 1, 1, (num/(a*Sqrt(b/a)))*ArcTan(var/Sqrt(b/a)));
It has been set up much like the integration algorithm. If the transformation algorithm cannot perform the transform, the expression is (in theory) returned unsimplified. Some cases may still erroneously return Undefined or Infinity.
The last operational property, dealing with integration, is not yet fully bug-tested; it sometimes returns Undefined or Infinity if the integral evaluates to such a value.
For the purpose of setting up the transform table, a few declaration functions have been defined, which use some generalized pattern matchers to be more flexible in recognizing expressions that are transformable.
The calling sequence for LapTranDef is
LapTranDef( in, out )
Currently in must be a pattern in the variable _t and out must be a function of s. For instance, for the function Cos(t) there is a declaration:
LapTranDef( Cos(_t), s/(s^2+1) );
The fact that the first argument is a pattern means that each occurrence of the variable to be matched should be referred to as _t, as in the example above.
LapTranDef generalizes the transform implicitly, in that it will set up the system to actually recognize expressions of the form Cos(a*t) and Cos(t/a) , and return the appropriate answer. The way this is done is by three separate rules for case of t itself, a*t and t/a. This is similar to the MatchLinear function that Integrate uses, except LaplaceTransforms must have b=0.
Without loss of generality, the coefficients a[i] of a polynomial
Assuming that the leading coefficient a[n]=1, the polynomial p can also be written as
To find roots, it is useful to first remove the multiplicities, that is, to convert the polynomial to one in which all irreducible factors have multiplicity 1, i.e. to find the polynomial p[1]*...*p[m]. This is called the "square-free part" of the original polynomial p.
The square-free part of the polynomial p can be easily found using the polynomial GCD algorithm. The derivative of a polynomial p can be written as:
The g.c.d. of p and p' equals
In what follows we shall assume that all polynomials are square-free with rational coefficients. Given any polynomial, we can apply the functions SquareFree and Rationalize and reduce it to this form. The function Rationalize converts all numbers in an expression to rational numbers. The function SquareFree returns the square-free part of a polynomial. For example:
In> Expand((x+1.5)^5)
Out> x^5+7.5*x^4+22.5*x^3+33.75*x^2+25.3125*x+7.59375;
In> SquareFree(Rationalize(%))
Out> x/5+3/10;
In> Simplify(%*5)
Out> (2*x+3)/2;
In> Expand(%)
Out> x+3/2;
The polynomial p can be assumed to have no multiple factors, and thus p and p' are relatively prime. The sequence of polynomials in the Sturm sequence are (up to a minus sign) the consecutive polynomials generated by Euclid's algorithm for the calculation of a greatest common divisor for p and p', so the last polynomial p[n] will be a constant.
In Yacas, the function SturmSequence(p) returns the Sturm sequence of p, assuming p is a univariate polynomial in x, p=p(x).
Given a Sturm sequence S=SturmSequence(p) of a polynomial p, the variation in the Sturm sequence V(S,y) is the number of sign changes in the sequence p[0], p[1] , ... , p[n], evaluated at point y, and disregarding zeroes in the sequence.
Sturm's theorem states that if a and b are two real numbers which are not roots of p, and a<b, then the number of roots between a and b is V(S,a)-V(S,b). A proof can be found in Knuth, The Art of Computer Programming, Volume 2, Seminumerical Algorithms.
For a and b, the values -Infinity and Infinity can also be used. In these cases, V(S,Infinity) is the number of sign changes between the leading coefficients of the elements of the Sturm sequence, and V(S,-Infinity) the same, but with a minus sign for the leading coefficients for which the degree is odd.
Thus, the number of real roots of a polynomial is V(S,-Infinity)-V(S,Infinity). The function NumRealRoots(p) returns the number of real roots of p.
The function SturmVariations(S,y) returns the number of sign changes between the elements in the Sturm sequence S, at point x=y:
In> p:=x^2-1
Out> x^2-1;
In> S:=SturmSequence(p)
Out> {x^2-1,2*x,1};
In> SturmVariations(S,-Infinity)-SturmVariations(S,Infinity)
Out> 2;
In> NumRealRoots(p)
Out> 2;
In> p:=x^2+1
Out> x^2+1;
In> S:=SturmSequence(p)
Out> {x^2+1,2*x,-1};
In> SturmVariations(S,-Infinity)-SturmVariations(S,Infinity)
Out> 0;
In> NumRealRoots(p)
Out> 0;
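The following Python sketch (with exact rational arithmetic, and function names chosen here for illustration, not the Yacas routines) shows how a Sturm sequence and the sign variations at a point can be computed for a polynomial given by its coefficient list:

from fractions import Fraction

def poly_rem(a, b):
    # remainder of polynomial division a mod b; coefficients from highest degree down
    a = a[:]
    while len(a) >= len(b):
        s = a[0] / b[0]
        a = [a[i] - s * b[i] if i < len(b) else a[i] for i in range(1, len(a))]
    while len(a) > 1 and a[0] == 0:
        a = a[1:]                      # strip leading zeros of the remainder
    return a or [Fraction(0)]

def sturm_sequence(p):
    # Sturm sequence: p, p', then minus the remainders from Euclid's algorithm
    p = [Fraction(c) for c in p]
    dp = [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]
    seq = [p, dp]
    while len(seq[-1]) > 1:
        seq.append([-c for c in poly_rem(seq[-2], seq[-1])])
    return seq

def variations(seq, x):
    # number of sign changes of the sequence evaluated at x, zeros disregarded
    vals = []
    for q in seq:
        v = Fraction(0)
        for c in q:
            v = v * x + c
        if v != 0:
            vals.append(v)
    return sum(1 for u, w in zip(vals, vals[1:]) if (u > 0) != (w > 0))

S = sturm_sequence([1, 0, -1])                 # p = x^2 - 1
print(variations(S, -2) - variations(S, 2))    # 2 roots in (-2, 2)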
We thus know that given
Now we can start the search for the bounds on all roots. The search starts with initial upper and lower bounds on ranges, subdividing ranges until a range contains only one root, and adding that range to the resulting list of bounds. If, when dividing a range, the middle of the range lands on a root, care must be taken, because the bounds should not themselves be roots. This can be solved by observing that if c is a root, p contains a factor x-c, and thus taking p(x+c) results in a polynomial with all the roots shifted by a constant -c and the root c moved to zero, i.e. p(x+c) contains a factor x. Thus new ranges to the left and right of c can be determined by first calculating the minimum bound M of p(x+c)/x. When the original range was (a, b), and c=(a+b)/2 is a root, the new ranges become (a, c-M) and (c+M, b).
In Yacas, MinimumBound(p) returns the lower bound described above, and MaximumBound(p) returns the upper bound on the roots in p. These bounds are returned as rational numbers. BoundRealRoots(p) returns a list of sublists with the bounds on the roots of a polynomial:
In> p:=(x+20)*(x+10)
Out> (x+20)*(x+10);
In> MinimumBound(p)
Out> 10/3;
In> MaximumBound(p)
Out> 60;
In> BoundRealRoots(p)
Out> {{-95/3,-35/2},{-35/2,-10/3}};
In> N(%)
Out> {{-31.6666666666,-17.5},{-17.5,-3.3333333333}};
It should be noted that since all calculations are done with rational numbers, the algorithm for bounding the roots is very robust. This is important, as the roots can be very unstable for small variations in the coefficients of the polynomial in question (see Davenport).
The bisection method is more robust, but slower. It works by taking the middle of the range, and checking signs of the polynomial to select the half-range where the root is. As there is only one root in the range ( a, b), in general it will be true that p(a)*p(b)<0, which is assumed by this method.
Yacas finds the roots by first trying the secant method, starting in the middle of the range, c=(a+b)/2. If this fails the bisection method is tried.
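A sketch of the bisection half of this strategy (plain Python, not the Yacas routine; the secant step is omitted), assuming the bracketing property p(a)*p(b)<0 mentioned above:

def bisect_root(p, a, b, tol=1e-12):
    # p is a callable; (a, b) must bracket exactly one root, so p(a)*p(b) < 0
    pa = p(a)
    while b - a > tol:
        c = (a + b) / 2
        pc = p(c)
        if pc == 0:
            return c                   # landed exactly on the root
        if pa * pc < 0:
            b = c                      # the sign change is in the left half
        else:
            a, pa = c, pc              # the sign change is in the right half
    return (a + b) / 2

print(bisect_root(lambda x: x**2 - 2, 1.0, 2.0))   # approximates Sqrt(2)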
The function call to find the real roots of a polynomial p in variable x is FindRealRoots(p), for example:
In> p:=Expand((x+3.1)*(x-6.23))
Out> x^2-3.13*x-19.313;
In> FindRealRoots(p)
Out> {-3.1,6.23};
In> p:=Expand((x+3.1)^3*(x-6.23))
Out> x^4+3.07*x^3-29.109*x^2-149.8199*x-185.59793;
In> p:=SquareFree(Rationalize(Expand((x+3.1)^3*(x-6.23))))
Out> (-160000*x^2+500800*x+3090080)/2611467;
In> FindRealRoots(p)
Out> {-3.1,6.23};
To speed up the calculation when one of the numbers is much larger than the other, one could use the property Gcd(a,b)=Gcd(b,Mod(a,b)). This will introduce an additional modular division into the algorithm; this is a slow operation when the numbers are large.
Primality of larger numbers is tested by the function IsPrime that uses the Miller-Rabin algorithm.
The idea of the Miller-Rabin algorithm is to improve the Fermat primality test. If n is prime, then for any x not divisible by n we have Gcd(n,x)=1. Then by Fermat's "little theorem", x^(n-1):=Mod(1,n). (This is really a simple statement; if n is prime, then the n-1 nonzero remainders modulo n: 1, 2, ..., n-1 form a cyclic multiplicative group.) Therefore we pick some "base" integer x and compute Mod(x^(n-1),n); this is a quick computation even if n is large. If this value is not equal to 1 for some base x, then n is definitely not prime. However, we cannot test every base x<n; instead we test only some x, so it may happen that we miss the right values of x that would expose the non-primality of n. So Fermat's test sometimes fails, i.e. says that n is a prime when n is in fact not a prime. Also there are infinitely many integers called "Carmichael numbers" which are not prime but pass the Fermat test for every base.
The Miller-Rabin algorithm improves on this by using the property that for prime n there are no nontrivial square roots of unity in the ring of integers modulo n (this is Lagrange's theorem). In other words, if x^2:=Mod(1,n) for some x, then x must be equal to 1 or -1 modulo n. (Note that n-1 is equal to -1 modulo n, so n-1 is a trivial square root of unity modulo n.) In fact, if n is prime, there are no divisors of zero at all, i.e. no numbers x and y, both nonzero modulo n, such that x*y:=Mod(0,n). If we find such x, y, then Gcd(x,n)>1 or Gcd(y,n)>1 and n is not prime.
We can check that n is odd before applying any primality test. (A test n^2:=Mod(1,24) guarantees that n is not divisible by 2 or 3. For large n it is faster to first compute Mod(n,24) rather than n^2, or test n directly.) Then we note that in Fermat's test the number n-1 is certainly a composite number because n-1 is even. So if we first find the largest power of 2 in n-1 and decompose n-1=2^r*q with q odd, then x^(n-1):=Mod(a^2^r,n) where a:=Mod(x^q,n). (Here r>=1 since n is odd.) In other words, the number Mod(x^(n-1),n) is obtained by repeated squaring of the number a. We get a sequence of r repeated squares: a, a^2, ..., a^2^r. The last element of this sequence must be 1 if n passes the Fermat test. (If it does not pass, n is definitely a composite number.) If n passes the Fermat test, the last-but-one element a^2^(r-1) of the sequence of squares is a square root of unity modulo n. We can check whether this square root is non-trivial (i.e. not equal to 1 or -1 modulo n). If it is non-trivial, then n definitely cannot be a prime. If it is trivial and equal to 1, we can check the preceding element, and so on. If an element is equal to -1, we cannot say anything, i.e. the test passes ( n is "probably a prime").
This procedure can be summarized like this:
Here is a more formal definition. An odd integer n is called strongly-probably-prime for base b if b^q:=Mod(1,n) or b^(q*2^i):=Mod(n-1,n) for some i such that 0<=i<r, where q and r are such that q is odd and n-1=q*2^r.
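A compact Python sketch of this strong probable prime test (the actual Yacas routine adds the base selection and the extra checks described below; names here are illustrative):

def is_strong_probable_prime(n, b):
    # decompose n - 1 = q * 2^r with q odd, as in the definition above
    q, r = n - 1, 0
    while q % 2 == 0:
        q //= 2
        r += 1
    x = pow(b, q, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(r - 1):
        x = x * x % n
        if x == n - 1:                 # hit the trivial square root -1: the test passes
            return True
        if x == 1:                     # a nontrivial square root of unity was squared: composite
            return False
    return False

# n passes for the small prime bases 2, 3, 5, 7 (still only "probably prime")
print(all(is_strong_probable_prime(10**9 + 7, b) for b in (2, 3, 5, 7)))   # True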
A practical application of this procedure needs to select particular base numbers. It is advantageous (according to [Pomerance et al. 1980]) to choose prime numbers b as bases, because for a composite base b=p*q, if n is a strong pseudoprime for both p and q, then it is very probable that n is a strong pseudoprime also for b, so composite bases rarely give new information.
An additional check suggested by [Davenport 1992] is activated if r>2 (i.e. if n:=Mod(1,8), which is true for only 1/4 of all odd numbers). If i>=1 is found such that b^(q*2^i):=Mod(n-1,n), then b^(q*2^(i-1)) is a square root of -1 modulo n. If n is prime, there may be only two different square roots of -1. Therefore we should store the set of found values of roots of -1; if there are more than two such roots, then we will find some roots s1, s2 of -1 such that s1+s2!=Mod(0,n). But s1^2-s2^2:=Mod(0,n). Therefore n is definitely composite, and in fact Gcd(s1+s2,n)>1. This check costs very little computational effort but guards against some strong pseudoprimes.
Yet another small improvement comes from [Damgard et al. 1993]. They found that the strong primality test sometimes (rarely) passes on composite numbers n for more than 1/8 of all bases x<n if n is such that either 3*n+1 or 8*n+1 is a perfect square, or if n is a Carmichael number. Checking Carmichael numbers is slow, but it is easy to show that if n is a large enough prime number, then neither 3*n+1, nor 8*n+1, nor any s*n+1 with small integer s can be a perfect square. [If s*n+1=r^2, then s*n=(r-1)*(r+1).] Testing for a perfect square is quick and does not slow down the algorithm. This is however not implemented in Yacas because it seems that perfect squares are too rare for this improvement to be significant.
If an integer is not "strongly-probably-prime" for a given base b, then it is a composite number. However, the converse statement is false, i.e. "strongly-probably-prime" numbers can actually be composite. Composite strongly-probably-prime numbers for base b are called strong pseudoprimes for base b. There is a theorem that if n is composite, then among all numbers b such that 1<b<n, at most one fourth are such that n is a strong pseudoprime for base b. Therefore if n is strongly-probably-prime for many bases, then the probability for n to be composite is very small.
For numbers less than B=34155071728321, exhaustive
In the implemented routine RabinMiller, the number of bases k is chosen to make the probability of erroneously passing the test p<10^(-25). (Note that this is not the same as the probability to give an incorrect answer, because all numbers that do not pass the test are definitely composite.) The probability for the test to pass mistakenly on a given number is found as follows. Suppose the number of bases k is fixed. Then the probability for a given composite number to pass the test is less than p[f]=4^(-k). The probability for a given number n to be prime is roughly p[p]=1/Ln(n) and to be composite p[c]=1-1/Ln(n). Prime numbers never fail the test. Therefore, the probability for the test to pass is p[f]*p[c]+p[p] and the probability to pass erroneously is
Before calling MillerRabin, the function IsPrime performs two quick checks: first, for n>=4 it checks that n is not divisible by 2 or 3 (all primes larger than 4 must satisfy this); second, for n>257, it checks that n does not contain small prime factors p<=257. This is checked by evaluating the GCD of n with the precomputed product of all primes up to 257. The computation of the GCD is quick and saves time in case a small prime factor is present.
There is also a function NextPrime(n) that returns the smallest prime number larger than n. This function uses a sequence 5,7,11,13,... generated by the function NextPseudoPrime. This sequence contains numbers not divisible by 2 or 3 (but perhaps divisible by 5,7,...). The function NextPseudoPrime is very fast because it does not perform a full primality test.
The function NextPrime however does check each of these pseudoprimes using IsPrime and finds the first prime number.
First we determine whether the number n contains "small" prime factors p<=257. A quick test is to find the GCD of n and the product of all primes up to 257: if the GCD is greater than 1, then n has at least one small prime factor. (The product of primes is precomputed.) If this is the case, the trial division algorithm is used: n is divided by all prime numbers p<=257 until a factor is found. NextPseudoPrime is used to generate the sequence of candidate divisors p.
After separating small prime factors, we test whether the number n is an integer power of a prime number, i.e. whether n=p^s for some prime number p and an integer s>=1. This is tested by the following algorithm. We already know that n is not prime and that n does not contain any small prime factors up to 257. Therefore if n=p^s, then p>257 and 2<=s<s[0]=Ln(n)/Ln(257). In other words, we only need to look for powers not greater than s[0]. This number can be approximated by the "integer logarithm" of n in base 257 (routine IntLog(n, 257)).
Now we need to check whether n is of the form p^s for s=2, 3, ..., s[0]. Note that if for example n=p^24 for some p, then the square root of n will already be an integer, n^(1/2)=p^12. Therefore it is enough to test whether n^(1/s) is an integer for all prime values of s up to s[0], and then we will definitely discover whether n is a power of some other integer. The testing is performed using the integer n-th root function IntNthRoot, which quickly computes the integer part of the n-th root of an integer number. If we discover that n has an integer root p of order s, we have to check that p itself is a prime power (we use the same algorithm recursively). The number n is a prime power if and only if p is itself a prime power. If we find no integer roots of orders s<=s[0], then n is not a prime power.
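The following Python sketch follows this prescription; IntLog and IntNthRoot are replaced here by a bit-length estimate and a binary-search integer root, and is_prime stands for any primality test (for instance the strong probable prime sketch above). All names are illustrative, not the Yacas implementation.

def integer_nth_root(n, s):
    # integer part of the s-th root of n, by binary search
    lo, hi = 1, 1 << (n.bit_length() // s + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** s <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def prime_power(n, is_prime):
    # return (p, s) with n = p^s, p prime and s >= 2, or None;
    # n is assumed to have no prime factors <= 257, so s <= log(n)/log(257)
    s_max = max(2, n.bit_length() // 8)          # rough integer logarithm of n in base 257
    for s in range(2, s_max + 1):
        if any(s % d == 0 for d in range(2, s)):
            continue                             # only prime exponents s need to be tried
        p = integer_nth_root(n, s)
        if p ** s != n:
            continue
        if is_prime(p):
            return p, s
        sub = prime_power(p, is_prime)           # p itself may again be a prime power
        if sub:
            return sub[0], sub[1] * s
    return None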
If the number n is not a prime power, the Pollard "rho" algorithm is applied [Pollard 1978]. The Pollard "rho" algorithm takes an irreducible polynomial, e.g. p(x)=x^2+1 and builds a sequence of integers x[k+1]:=Mod(p(x[k]),n), starting from x[0]=2. For each k, the value x[2*k]-x[k] is attempted as possibly containing a common factor with n. The GCD of x[2*k]-x[k] with n is computed, and if Gcd(x[2*k]-x[k],n)>1, then that GCD value divides n.
The idea behind the "rho" algorithm is to generate an effectively random sequence of trial numbers t[k] that may have a common factor with n. The efficiency of this algorithm is determined by the size of the smallest factor p of n. Suppose p is the smallest prime factor of n and suppose we generate a random sequence of integers t[k] such that 1<=t[k]<n. It is clear that, on the average, a fraction 1/p of these integers will be divisible by p. Therefore (if t[k] are truly random) we should need on the average p tries until we find t[k] which is accidentally divisible by p. In practice, of course, we do not use a truly random sequence and the number of tries before we find a factor p may be significantly different from p. The quadratic polynomial seems to help reduce the number of tries in most cases.
But the Pollard "rho" algorithm may actually enter an infinite loop when the sequence x[k] repeats itself without giving any factors of n. For example, the unmodified "rho" algorithm starting from x[0]=2 loops on the number 703. The loop is detected by comparing x[2*k] and x[k]. When these two quantities become equal to each other for the first time, the loop may not yet have occurred so the value of GCD is set to 1 and the sequence is continued. But when the equality of x[2*k] and x[k] occurs many times, it indicates that the algorithm has entered a loop. A solution is to randomly choose a different starting number x[0] when a loop occurs and try factoring again, and keep trying new random starting numbers between 1 and n until a non-looping sequence is found. The current implementation stops after 100 restart attempts and prints an error message, "failed to factorize number".
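A simplified Python sketch of this procedure (it restarts immediately on the first collision x[2k]=x[k] rather than waiting for repeated collisions, which only costs some extra restarts; the function name is illustrative):

from math import gcd
from random import randrange

def pollard_rho(n, max_restarts=100):
    # n is assumed odd, composite and not a prime power; returns a nontrivial divisor or None
    x0 = 2
    for _ in range(max_restarts):
        x = y = x0
        while True:
            x = (x * x + 1) % n            # x[k]
            y = (y * y + 1) % n
            y = (y * y + 1) % n            # y runs twice as fast, so y = x[2k]
            if x == y:
                break                      # the sequence looped without giving a factor
            d = gcd(abs(x - y), n)
            if d > 1:
                return d
        x0 = randrange(2, n - 1)           # restart from a different random starting value
    return None

print(pollard_rho(703))                    # 703 = 19*37; the fixed start x0 = 2 loops, so a restart is needed; prints 19 or 37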
A better (and faster) integer factoring algorithm needs to be implemented in Yacas.
Modern factoring algorithms are all probabilistic (i.e. they do not guarantee a particular finishing time) and fall into three categories:
There is ample literature describing these algorithms.
The Legendre symbol (m/ n) is defined as +1 if m is a quadratic residue modulo n and -1 if it is a non-residue. The Legendre symbol is equal to 0 if m/n is an integer.
The Jacobi symbol [m/n;] is defined as the product of the Legendre symbols of the prime factors f[i] of n=f[1]^p[1]*...*f[s]^p[s],
The Jacobi symbol can be efficiently computed without knowing the full factorization of the number n. The currently used method is based on the following four identities for the Jacobi symbol:
Using these identities, we can recursively reduce the computation of the Jacobi symbol [a/b;] to the computation of the Jacobi symbol for numbers that are on the average half as large. This is similar to the fast "binary" Euclidean algorithm for the computation of the GCD. The number of levels of recursion is logarithmic in the arguments a, b.
More formally, Jacobi symbol [a/b;] is computed by the following algorithm. (The number b must be an odd positive integer, otherwise the result is undefined.)
Note that the arguments a, b may be very large integers and we should avoid performing multiplications of these numbers. We can compute (-1)^((b-1)*(c-1)/4) without multiplications. This expression is equal to 1 if either b or c is equal to 1 mod 4; it is equal to -1 only if both b and c are equal to 3 mod 4. Also, (-1)^((b^2-1)/8) is equal to 1 if either b:=1 or b:=7 mod 8, and it is equal to -1 if b:=3 or b:=5 mod 8. Of course, if s is even, none of this needs to be computed.
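A Python sketch of such a binary reduction (this is the standard algorithm; the reductions in the Yacas routine may be organized differently):

def jacobi(a, b):
    # Jacobi symbol (a/b); b must be an odd positive integer
    a %= b
    result = 1
    while a != 0:
        while a % 2 == 0:              # factor out 2 using (2/b) = (-1)^((b^2-1)/8)
            a //= 2
            if b % 8 in (3, 5):
                result = -result
        a, b = b, a                    # reciprocity: flip the sign only if both are 3 mod 4
        if a % 4 == 3 and b % 4 == 3:
            result = -result
        a %= b
    return result if b == 1 else 0     # a common factor was found: the symbol is 0

print(jacobi(7, 15), jacobi(8, 15))    # prints -1 1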
The first term of the series gives, at large n, the Hardy-Ramanujan asymptotic estimate,
There exist estimates of the error of this series, but they are complicated. The series is sufficiently well-behaved and it is easier to determine the truncation point heuristically. Each term of the series is either 0 (when all terms in A(k,n) happen to cancel) or has a magnitude which is not very much larger than the magnitude of the previous nonzero term. (But the series is not actually monotonic.) In the current implementation, the series is truncated when Abs(A(k,n)*S(n)*Sqrt(k)) becomes smaller than 0.1 for the first time; in any case, the maximum number of calculated terms is 5+Sqrt(n)/2. One can show that asymptotically for large n, the required number of terms is less than mu/Ln(mu), where mu:=Pi*Sqrt((2*n)/3).
[Ahlgren et al. 2001] mention that there exist explicit constants B[1] and B[2] such that
The floating-point precision necessary to obtain the integer result must be at least the number of digits in the first term P _0(n), i.e.
The RHR algorithm requires O((n/Ln(n))^(3/2)) operations, of which O(n/Ln(n)) are long multiplications at precision Prec ~ O(Sqrt(n)) digits. The computational cost is therefore O(n/Ln(n)*M(Sqrt(n))).
The sum is actually not over all k up to n but is truncated when the pentagonal sequence grows above n. Therefore, it contains only O(Sqrt(n)) terms. However, computing P(n) using the recurrence relation requires computing and storing P(k) for all 1<=k<=n. No long multiplications are necessary, but the number of long additions of numbers with Prec ~ O(Sqrt(n)) digits is O(n^(3/2)). Therefore the computational cost is O(n^2). This is asymptotically slower than the RHR algorithm even if a slow O(n^2) multiplication is used. With internal Yacas math, the recurrence relation is faster for n<300 or so, and for larger n the RHR algorithm is faster.
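For comparison, here is a Python sketch of that recurrence (Euler's pentagonal number recurrence), which computes P(0)..P(n) with additions only; the function name is illustrative:

def partitions_upto(n):
    # P[m] is built from P[m - k(3k-1)/2] and P[m - k(3k+1)/2] with alternating signs
    P = [1] + [0] * n
    for m in range(1, n + 1):
        k, total = 1, 0
        while True:
            g1 = k * (3 * k - 1) // 2          # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > m:
                break
            sign = 1 if k % 2 else -1
            total += sign * P[m - g1]
            if g2 <= m:
                total += sign * P[m - g2]
            k += 1
        P[m] = total
    return P

print(partitions_upto(10)[10])                 # prints 42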
Let p[1]^k[1]*...*p[r]^k[r] be the prime factorization of n, where r is the number of distinct prime factors and k[i] is the multiplicity of the i-th factor. Then
The functions ProperDivisors and ProperDivisorsSum are functions that do the same as the above functions, except they do not consider the number n as a divisor for itself. These functions are defined by:
ProperDivisors(n)=Divisors(n)-1,
ProperDivisorsSum(n)=DivisorsSum(n)-n.
Another number-theoretic function is Moebius, defined as follows: Moebius(n)=(-1)^r if no factors of n are repeated, Moebius(n)=0 if some factors are repeated, and Moebius(n)=1 if n=1. This again requires factoring the number n completely and investigating the properties of its prime factors. From the definition, it can be seen that if n is prime, then Moebius(n)=-1. The predicate IsSquareFree(n) then reduces to Moebius(n)!=0, which means that no factors of n are repeated.
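Given the factorization as a list of (prime, multiplicity) pairs, these functions reduce to a few lines. A Python sketch (the names echo the Yacas functions but are not the Yacas implementation):

def divisors_count(factors):
    # number of divisors: product of (k_i + 1) over the factorization [(p, k), ...]
    result = 1
    for _, k in factors:
        result *= k + 1
    return result

def divisors_sum(factors):
    # sum of divisors: product of (1 + p + ... + p^k) over the factorization
    result = 1
    for p, k in factors:
        result *= (p ** (k + 1) - 1) // (p - 1)
    return result

def moebius(factors):
    if any(k > 1 for _, k in factors):
        return 0                        # a repeated prime factor gives 0
    return (-1) ** len(factors)         # otherwise (-1)^r; the empty factorization (n = 1) gives 1

f12 = [(2, 2), (3, 1)]                  # 12 = 2^2 * 3
print(divisors_count(f12), divisors_sum(f12), moebius(f12))   # prints 6 28 0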
The function GaussianNorm computes the norm N(z)=a^2+b^2 of z. The norm plays a fundamental role in the arithmetic of Gaussian integers, since it has the multiplicative property:
A unit of a ring is an element that divides any other element of the ring. There are four units in the Gaussian integers: 1, -1, I, -I. They are exactly the Gaussian integers whose norm is 1. The predicate IsGaussianUnit tests for a Gaussian unit.
Two Gaussian integers z and w are "associated" if z/w is a unit. For example, 2+I and -1+2*I are associated.
A Gaussian integer is called prime if it is only divisible by the units and by its associates. It can be shown that the primes in the ring of Gaussian integers are:
For example, 7 is prime as a Gaussian integer, while 5 is not, since 5=(2+I)*(2-I). Here 2+I is a Gaussian prime.
The ring of Gaussian integers is an example of an Euclidean ring, i.e. a ring where there is a division algorithm. This makes it possible to compute the greatest common divisor using Euclid's algorithm. This is what the function GaussianGcd computes.
As a consequence, one can prove a version of the fundamental theorem of arithmetic for this ring: The expression of a Gaussian integer as a product of primes is unique, apart from the order of primes, the presence of units, and the ambiguities between associated primes.
The function GaussianFactors finds this expression of a Gaussian integer z as the product of Gaussian primes, and returns the result as a list of pairs {p,e}, where p is a Gaussian prime and e is the corresponding exponent. To do that, an auxiliary function called GaussianFactorPrime is used. This function finds a factor of a rational prime of the form 4*n+1. We compute a:=(2*n)! (mod p). By Wilson's theorem a^2 is congruent to -1 (mod p), and it follows that p divides (a+I)*(a-I)=a^2+1 in the Gaussian integers. The desired factor is then the GaussianGcd of a+I and p. If the result is a+b*I, then p=a^2+b^2.
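A Python sketch of this construction (exact integer arithmetic; the Gaussian gcd uses nearest-integer rounding of the complex quotient, which is one way to realize the Euclidean division mentioned above; all names are illustrative):

def gaussian_gcd(z, w):
    # Euclid's algorithm in the Gaussian integers; z and w are (re, im) pairs
    while w != (0, 0):
        a, b = z
        c, d = w
        n = c * c + d * d                              # norm of w
        qre = (2 * (a * c + b * d) + n) // (2 * n)     # nearest integer to Re(z/w)
        qim = (2 * (b * c - a * d) + n) // (2 * n)     # nearest integer to Im(z/w)
        r = (a - (qre * c - qim * d), b - (qre * d + qim * c))
        z, w = w, r
    return z

def gaussian_factor_prime(p):
    # p is a rational prime with p % 4 == 1; returns a Gaussian prime (a, b) with a^2 + b^2 = p
    n = (p - 1) // 4
    a = 1
    for k in range(2, 2 * n + 1):                      # a = (2n)! mod p; a^2 = -1 mod p by Wilson's theorem
        a = a * k % p
    return gaussian_gcd((a, 1), (p, 0))                # gcd of a + I and p

print(gaussian_factor_prime(13))                       # prints a pair (a, b) with a^2 + b^2 = 13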
If z is a rational (i.e. real) integer, we factor z in the Gaussian integers by first factoring it in the rational integers, and after that by factoring each of the integer prime factors in the Gaussian integers.
If z is not a rational integer, we find its possible Gaussian prime factors by first factoring its norm N(z) and then computing the exponent of each of the factors of N(z) in the decomposition of z.
A simple factorization algorithm is developed for univariate polynomials. This algorithm is implemented as the function BinaryFactors. The algorithm was named the binary factoring algorithm since it determines factors to a polynomial modulo 2^n for successive values of n, effectively adding one binary digit to the solution in each iteration. No reference to this algorithm has been found so far in literature.
Berlekamp showed that polynomials can be efficiently factored when arithmetic is done modulo a prime. The Berlekamp algorithm is only efficient for small primes, but after that Hensel lifting can be used to determine the factors modulo larger numbers.
The algorithm presented here is similar in approach to applying the Berlekamp algorithm to factor modulo a small prime, and then factoring modulo powers of this prime (using the solutions found modulo the small prime by the Berlekamp algorithm) by applying Hensel lifting. However, it is simpler in its setup. It factors modulo 2, by trying all possible factors modulo 2 (two possibilities, if the polynomial is monic). This performs the same action usually left to the Berlekamp step. After that, given a solution f_i modulo 2^n, it tests whether f_i or f_i+2^n is a solution modulo 2^(n+1).
This scheme raises the precision of the solution with one digit in binary representation. This is similar to the linear Hensel lifting algorithm, which factors modulo p^n for some prime p, where n increases by one after each iteration. There is also a quadratic version of Hensel lifting which factors modulo p^2^n, in effect doubling the number of digits (in p-adic expansion) of the solution after each iteration. However, according to "Davenport", the quadratic algorithm is not necessarily faster.
The algorithm here should thus be equivalent in complexity to the linear version of Hensel lifting. This has not been verified yet.
Arithmetic modulo an integer p requires performing the arithmetic operation and afterwards determining that integer modulo p. A number x can be written as
When Mod(x,p)=Mod(y,p), the notation Mod(x=y,p) is used. All arithmetic calculations are done modulo an integer p in that case.
For calculations modulo some p the following rules hold:
For polynomials v_1(x) and v_2(x) it further holds that
An interesting corollary to this is that, for some prime integer p:
into a form
where f_i(x) are irreducible polynomials of the form:
The part that could not be factorized is returned as g(x), with a possible constant factor C.
The factors f_i(x) and g(x) are determined uniquely by requiring them to be monic. The constant C accounts for a common factor.
The c_i constants in the resulting solutions f_i(x) can be rational numbers (or even complex numbers, if Gaussian integers are used).
The polynomial now only has integer coefficients.
The polynomial is now a monic polynomial in y.
After factoring, the irreducible factors of p(x) can be obtained by multiplying C with 1/a_n^(n-1), and replacing y with a_n*x. The irreducible solutions a_n*x+c_i can be replaced by x+c_i/a_n after multiplying C by a_n, converting the factors to monic factors.
After the steps described here the polynomial is now monic with integer coefficients, and the factorization of this polynomial can be used to determine the factors of the original polynomial p(x).
for some polynomial d(x) to be divided by, modulo some integer p. d(x) is said to divide p(x) (modulo p) if r(x) is zero. It is then a factor modulo p.
For the binary factoring algorithm it is important that if some monic d(x) divides p(x), then it also divides p(x) modulo any integer p.
Define deg(f(x)) to be the degree of f(x) and lc(f(x)) to be the leading coefficient of f(x). Then, if deg(p(x))>=deg(d(x)), one can compute an integer s such that
If p is prime, then
because Mod(a^(p-1)=1,p) for any a not divisible by p. If p is not prime but d(x) is monic (and thus lc(d(x))=1),
This identity can also be used when dividing in general (not modulo some integer), since the divisor is monic.
The quotient can then be updated by adding a term:
term=s*x^(deg(p(x))-deg(d(x)))
and updating the polynomial to be divided, p(x), by subtracting d(x)*term. The resulting polynomial to be divided now has a degree one smaller than the previous.
When the degree of p(x) is less than the degree of d(x) it is returned as the remainder.
A full division algorithm for arbitrary integer p>1 with lc(d(x))=1 would thus look like:
divide(p(x),d(x),p)
  q(x) = 0
  r(x) = p(x)
  while (deg(r(x)) >= deg(d(x)))
    s = lc(r(x))
    term = s*x^(deg(r(x))-deg(d(x)))
    q(x) = q(x) + term
    r(x) = r(x) - term*d(x) mod p
  return {q(x),r(x)}
The reason we can get away with factoring modulo 2^n as opposed to factoring modulo some prime p in later sections is that the divisor d(x) is monic. Its leading coefficient is one and thus q(x) and r(x) can be uniquely determined. If p is not prime and lc(d(x)) is not equal to one, there might be multiple combinations for which p(x)=q(x)*d(x)+r(x), and we are interested in the combinations where r(x) is zero. This can be costly to determine unless q(x),r(x) is unique. This is the case here because we are factoring a monic polynomial, and are thus only interested in cases where lc(d(x))=1.
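A Python sketch of this division routine for a monic divisor, with coefficients kept as lists from the highest degree down (illustrative only; the Yacas code works on its own polynomial representation):

def poly_divmod_mod(p, d, m):
    # divide p(x) by a monic d(x), all coefficient arithmetic done modulo the integer m;
    # coefficients are listed from highest to lowest degree, and d[0] must be 1
    r = [c % m for c in p]
    q = []
    while len(r) >= len(d):
        s = r[0]                        # lc(r); also the next quotient coefficient, since d is monic
        q.append(s)
        # subtract s * x^(deg r - deg d) * d(x); the leading term cancels and the degree drops by one
        r = [(r[i] - s * d[i]) % m if i < len(d) else r[i] for i in range(1, len(r))]
    return q, r                         # quotient and remainder coefficient lists

# (x^2 + 3x + 2) divided by (x + 1) modulo 5: quotient x + 2, remainder 0
print(poly_divmod_mod([1, 3, 2], [1, 1], 5))   # prints ([1, 2], [0])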
It will be factored into a form:
where all factors f_i(x) are also monic.
The algorithm starts by setting up a test polynomial, p_test(x), which divides p(x), but has the property that
Such a polynomial is said to be square-free. It has the same factors as the original polynomial, but the original might contain multiple copies of each factor, whereas p_test(x) does not.
The square-free part of a polynomial can be obtained as follows:
It can be seen by simply writing this out that p(x) and D(x)p(x) will have the factors f_i(x)^(p_i-1) in common. These can thus be divided out.
It is not a requirement of the algorithm that the polynomial being worked with is square-free, but it speeds up computations to work with the square-free part of the polynomial if only the set of factors is sought. The multiplicity of the factors can be determined using the original p(x).
Binary factoring then proceeds by trying to find potential solutions modulo p=2 first. There can only be two such solutions: x+0 and x+1.
A list of possible solutions L is set up with potential solutions.
If an element in L divides p_test(x), p_test(x) is divided by it, and a loop is entered to test how often it divides p(x), to determine the multiplicity p_i of the factor. The found factor f_i(x)=x+c_i is added as a combination (x+c_i, p_i), and p(x) is divided by f_i(x)^p_i.
At this point there is a list L of factors that divide p_test(x) modulo 2^n. This implies that for each of the elements u in L, either u or u+2^n should divide p_test(x) modulo 2^(n+1). The following step is thus to set up a new list with new elements that divide p_test(x) modulo 2^(n+1).
The loop is re-entered, this time doing the calculation modulo 2^(n+1) instead of modulo 2^n.
The loop is terminated if the number of factors found equals deg(p_test(x)), or if 2^n is larger than the smallest non-zero coefficient of p_test(x) (since this smallest non-zero coefficient is the product of the smallest non-zero coefficients of the factors), or if the list of potential factors is empty.
The polynomial p(x) cannot be factored any further, and is added as a factor (p(x), 1).
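To make the lifting step concrete, here is a Python sketch restricted to monic integer polynomials and integer roots (i.e. linear factors x - r); the Yacas routine BinaryFactors handles rational coefficients and keeps whole factor polynomials as candidates, but the modulo-2^n lifting idea is the same. The function name and the Cauchy root bound used for termination are choices made here for illustration.

def integer_roots_by_lifting(coeffs):
    # coeffs: integer coefficients of a monic polynomial, highest degree first
    def p(x):
        v = 0
        for c in coeffs:
            v = v * x + c
        return v
    bound = 1 + max(abs(c) for c in coeffs)       # Cauchy bound: every root r satisfies |r| <= 1 + max|a_i|
    candidates = [t for t in (0, 1) if p(t) % 2 == 0]
    q = 2
    while q <= 2 * bound and candidates:
        q *= 2                                    # lift from modulo q/2 to modulo q
        candidates = [t for c in candidates for t in (c, c + q // 2) if p(t) % q == 0]
    roots = set()
    for t in candidates:                          # each residue represents the signed root t or t - q
        for r in (t, t - q):
            if p(r) == 0:
                roots.add(r)
    return sorted(roots)

print(integer_roots_by_lifting([1, -2, -3]))      # x^2 - 2x - 3 = (x + 1)(x - 3): prints [-1, 3]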
The function BinaryFactors, when implemented, yields the following interaction in Yacas:
In> BinaryFactors((x+1)^4*(x-3)^2)
Out> {{x-3,2},{x+1,4}}
In> BinaryFactors((x-1/5)*(2*x+1/3))
Out> {{2,1},{x-1/5,1},{x+1/6,1}}
In> BinaryFactors((x-1123125)*(2*x+123233))
Out> {{2,1},{x-1123125,1},{x+123233/2,1}}
The binary factoring algorithm starts with a factorization modulo 2, and then each time tries to guess the next bit of the solution, maintaining a list of potential solutions. This list can grow exponentially in certain instances. For instance, factoring (x-a)*(x-2*a)*(x-3*a)*... implies that the roots have common factors. There are inputs where the number of potential solutions (almost) doubles with each iteration. For these inputs the algorithm becomes exponential. The worst-case performance is therefore exponential. The list of potential solutions while iterating will contain a lot of false roots in that case.
For the initial solutions modulo 2, the possible solutions are x and x-1. For p=0, rem(0)=a_0. For p=1, rem(1)=Sum(i,0,n,a_i).
Given a solution x-p modulo q=2^n, we consider the possible solutions Mod(x-p,2^(n+1)) and Mod(x-(p+2^n),2^(n+1)).
x-p is a possible solution if Mod(rem(p),2^(n+1))=0.
x-(p+q) is a possible solution if Mod(rem(p+q),2^(n+1))=0. Expanding Mod(rem(p+q),2*q) yields:
When expanding this expression, some terms grouped under extra(p,q) have factors like 2*q or q^2. Since q=2^n, these terms vanish if the calculation is done modulo 2^(n+1).
The expression for extra(p,q) then becomes
An efficient approach to determining if x-p or x-(p+q) divides p(x) modulo 2^(n+1) is then to first calculate Mod(rem(p),2*q). If this is zero, x-p divides p(x). In addition, if Mod(rem(p)+extra(p,q),2*q) is zero, x-(p+q) is a potential candidate.
Other efficiencies are derived from the fact that the operations are done in binary. E.g. if q=2^n, then q_next=2^(n+1)=2*q=q<<1 is used in the next iteration. Also, calculations modulo 2^n are equivalent to performing a bitwise AND with 2^n-1. These operations can in general be performed efficiently on today's hardware, which is based on binary representations.
For this to work the division algorithm would have to be extended to handle complex numbers with integer a and b modulo some integer, and the initial setup of the potential solutions would have to be extended to try x+1+I and x+I also. The step where new potential solutions modulo 2^(n+1) are determined should then also test for x+I*2^n and x+2^n+I*2^n.
The same extension could be made for multivariate polynomials, although setting up the initial irreducible polynomials that divide p _test(x) modulo 2 might become expensive if done on a polynomial with many variables (2^(2^m-1) trials for m variables).
Lastly, polynomials with real-valued coefficients could be factored, if the coefficients were first converted to rational numbers. However, for real-valued coefficients there exist other methods (Sturm sequences).
Newton iteration is based on the following idea: when one takes a Taylor series expansion of a function:
Newton iteration then proceeds by taking only the first two terms in this series, the constant plus the constant times dx. Given some good initial value x_0, assumed to be close to a root, the function is assumed to be almost linear near it, hence this approximation. Under these assumptions, if we want f(x_0+dx) to be zero,
This yields:
Thus the next, better approximation for the root is x[1]:=x[0]-f(x[0])/(D(x)f(x[0])), or more generally:
If the root has multiplicity one, Newton iteration can converge quadratically, meaning that the number of correct decimal digits doubles with each iteration.
As an example, we can try to find a root of Sin(x) near 3, which should converge to Pi.
Setting precision to 30 digits,
In> Precision(30)
Out> True;
We first set up a function dx(x):
In> dx(x):=Eval(-Sin(x)/(D(x)Sin(x)))
Out> True;
And we start with a good initial approximation to Pi, namely 3. Note we should set x after we set dx(x), as the right hand side of the function definition is evaluated. We could also have used a different parameter name for the definition of the function dx(x).
In> x:=3
Out> 3;
We can now start the iteration:
In> x:=N(x+dx(x))
Out> 3.142546543074277805295635410534;
In> x:=N(x+dx(x))
Out> 3.14159265330047681544988577172;
In> x:=N(x+dx(x))
Out> 3.141592653589793238462643383287;
In> x:=N(x+dx(x))
Out> 3.14159265358979323846264338328;
In> x:=N(x+dx(x))
Out> 3.14159265358979323846264338328;
As shown, in this example the iteration converges quite quickly.
Given N functions in N variables, we want to solve
for i=1..N. If we denote by X the vector X:={x[1],x[2],...,x[N]}
and by dX the delta vector, then one can write
Setting f _i(X+dX) to zero, one obtains
where
and
So the generalization is to first initialize X to a good initial value, calculate the matrix elements a[i][j] and the vector b[i], and then to proceed to calculate dX by solving the matrix equation, and calculating
In the case of one function of one variable, the summation reduces to one term, so the linear set of equations was a lot simpler there. In the general case, this set of linear equations has to be solved in each iteration.
As an example, suppose we want to find the zeroes for the following two functions:
and
It is clear that the solution to this is a=2 and x:=N*Pi/2 for any integer value N.
We will do calculations with precision 30:
In> Precision(30)
Out> True;
And we set up a vector of functions {f_1(X),f_2(X)}, where X:={a,x}:
In> f(a,x):={Sin(a*x),a-2}
Out> True;
Now we set up a function matrix(a,x) which returns the matrix a[i][j]:
In> matrix(a,x):=Eval({D(a)f(a,x),D(x)f(a,x)})
Out> True;
We now set up some initial values:
In> {a,x}:={1.5,1.5}
Out> {1.5,1.5};
The iteration converges a lot slower for this example, so we will loop 100 times:
In> For(ii:=1,ii<100,ii++)[{a,x}:={a,x}+N(SolveMatrix(matrix(a,x),-f(a,x)));]
Out> True;
In> {a,x}
Out> {2.,0.059667311457823162437151576236};
The value for a has already been found. Iterating a few more times:
In> For(ii:=1,ii<100,ii++)[{a,x}:={a,x}+N(SolveMatrix(matrix(a,x),-f(a,x)));]
Out> True;
In> {a,x}
Out> {2.,-0.042792753588155918852832259721};
In> For(ii:=1,ii<100,ii++)[{a,x}:={a,x}+N(SolveMatrix(matrix(a,x),-f(a,x)));]
Out> True;
In> {a,x}
Out> {2.,0.035119151349413516969586788023};
The value for x converges a lot slower this time, and to the uninteresting value of zero (a rather trivial zero of this set of functions). In fact, for all integer values N the value N*Pi/2 is a solution. Trying various initial values will find them.
Applying a Newton iteration step g[i+1]=g[i]-h(g[i])/(D(g)h(g[i])) to this expression yields:
von zur Gathen then proves by induction that for f(x) monic, and thus f(0)=1, given initial value g_0(x)=1, that
Example:
suppose we want to find the polynomial g(x) up to the 7th degree for which Mod(f(x)*g(x)=1,x^8), for the function
First we define the function f:
In> f:=1+x+x^2/2+x^3/6+x^4/24
Out> x+x^2/2+x^3/6+x^4/24+1;
And initialize g and i.
In> g:=1
Out> 1;
In> i:=0
Out> 0;
Now we iterate, increasing i, and replacing g with the new value for g:
In> [i++;g:=BigOh(2*g-f*g^2,x,2^i);]
Out> 1-x;
In> [i++;g:=BigOh(2*g-f*g^2,x,2^i);]
Out> x^2/2-x^3/6-x+1;
In> [i++;g:=BigOh(2*g-f*g^2,x,2^i);]
Out> x^7/72-x^6/72+x^4/24-x^3/6+x^2/2-x+1;
The resulting expression must thus be:
We can easily verify this:
In> Expand(f*g)
Out> x^11/1728+x^10/576+x^9/216+(5*x^8)/576+1;
This expression is 1 modulo x^8, as can easily be shown:
In> BigOh(%,x,8)
Out> 1;
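The same iteration in a Python sketch, with a power series represented as a list of rational coefficients from degree 0 upwards (the truncation done by BigOh above is played here by simple list slicing; names are illustrative):

from fractions import Fraction

def mul_trunc(a, b, n):
    # product of two series truncated after degree n - 1
    c = [Fraction(0)] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def series_inverse(f, order):
    # Newton iteration g -> 2g - f*g^2 modulo x^(2^i); requires f[0] = 1
    g = [Fraction(1)]
    n = 1
    while n < order:
        n *= 2
        fg2 = mul_trunc(f, mul_trunc(g, g, n), n)
        g = [2 * a - b for a, b in zip(g + [Fraction(0)] * (n - len(g)), fg2)]
    return g[:order]

f = [Fraction(1), Fraction(1), Fraction(1, 2), Fraction(1, 6), Fraction(1, 24)]
print([str(c) for c in series_inverse(f, 8)])
# ['1', '-1', '1/2', '-1/6', '1/24', '0', '-1/72', '1/72'], matching the result above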
There are two tasks related to preparation of plots of functions: first, to produce the numbers required for a plot, and second, to draw a plot with axes, symbols, a legend, perhaps additional illustrations and so on. Here we only concern ourselves with the first task, that of preparation of the numerical data for a plot. There are many plotting programs that can read a file with numbers and plot it in any desired manner.
Generating data for plots of functions generally does not require high-precision calculations. However, we need an algorithm that can be adjusted to produce data to different levels of precision. In some particularly ill-behaved cases, a precise plot will not be possible and we would not want to waste time producing data that is too accurate for what it is worth.
A simple approach to plotting would be to divide the interval into many equal subintervals and to evaluate the function on the resulting grid. Precision of the plot can be adjusted by choosing a larger or a smaller number of points.
However, this approach is not optimal. Sometimes a function changes rapidly near one point but slowly everywhere else. For example, f(x)=1/x changes very quickly at small x. Suppose we need to plot this function between 0 and 100. It would be wasteful to use the same subdivision interval everywhere: a finer grid is only required over a small portion of the plotting range near x=0.
The adaptive plotting routine Plot2D'adaptive uses a simple algorithm to select the optimal grid to approximate a function of one argument f(x). The algorithm repeatedly subdivides the grid intervals near points where the existing grid does not represent the function well enough. A similar algorithm for adaptive grid refinement could be used for numerical integration. The idea is that plotting and numerical integration require the same kind of detailed knowledge about the behavior of the function.
The algorithm first splits the interval into a specified initial number of equal subintervals, and then repeatedly splits each subinterval in half until the function is well enough approximated by the resulting grid. The integer parameter depth gives the maximum number of binary splittings for a given initial interval; thus, at most 2^depth additional grid points will be generated. The function Plot2D'adaptive should return a list of pairs of points {{x1,y1}, {x2,y2}, ...} to be used directly for plotting.
The adaptive plotting algorithm works like this:
This algorithm works well if the initial number of points and the depth parameter are large enough. These parameters can be adjusted to balance the available computing time and the desired level of detail in the resulting plot.
Singularities in the function are handled by step 3. Namely, the change in the sequence a, a[1], b, b[1], c is always considered to be "too rapid" if one of these values is a non-number (e.g. Infinity or Undefined). Thus, the interval immediately adjacent to a singularity will be plotted at the highest allowed refinement level. When preparing the plotting data, the singular points are simply not printed to the data file, so that a plotting program does not encounter any problems.
The coefficients c[k] for grids with a constant step h can be found, for example, by solving the following system of equations,
In the same way it is possible to find quadratures for the integral over a subinterval rather than over the whole interval of x. In the current implementation of the adaptive plotting algorithm, two quadratures are used: the 3-point quadrature ( n=2) and the 4-point quadrature ( n=3) for the integral over the first subinterval, Integrate(x,a[0],a[1])f(x). Their coefficients are (5/12, 2/3, -1/12) and ( 3/8, 19/24, -5/24, 1/24). An example of using the first of these subinterval quadratures would be the approximation
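A Python sketch of how these two subinterval quadratures can be turned into a refinement criterion (the tolerance handling and scaling here are illustrative choices; the actual Plot2D'adaptive code may compare the estimates differently):

import math

def needs_refinement(f0, f1, f2, f3, h, tol=1e-3):
    # f0..f3 are function values at four consecutive grid points with constant step h;
    # compare the 3-point and 4-point quadratures for the integral over the first subinterval
    if not all(map(math.isfinite, (f0, f1, f2, f3))):
        return True                                   # singular values always force refinement
    q3 = h * (5 * f0 / 12 + 2 * f1 / 3 - f2 / 12)
    q4 = h * (3 * f0 / 8 + 19 * f1 / 24 - 5 * f2 / 24 + f3 / 24)
    scale = abs(q3) + abs(q4) + h * (abs(f0) + abs(f1))
    return abs(q3 - q4) > tol * (scale + tol)         # disagreement means the grid is too coarse here

# a smooth (here linear) stretch of data does not trigger refinement, a spike does
print(needs_refinement(0.0, 0.1, 0.2, 0.3, 0.1))      # False
print(needs_refinement(0.0, 0.0, 1.0, 0.0, 0.1))      # True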
The task of surface plotting is to obtain a picture of a two-dimensional surface as if it were a solid object in three dimensions. A graphical representation of a surface is a complicated task. Sometimes it is required to use particular coordinates or projections, to colorize the surface, to remove hidden lines and so on. We shall only be concerned with the task of obtaining the data for a plot from a given function of two variables f(x,y). Specialized programs can take a text file with the data and let the user interactively produce a variety of surface plots.
The currently implemented algorithm in the function Plot3DS is very similar to the adaptive plotting algorithm for two-dimensional plots. A given rectangular plotting region a[1]<=x<=a[2], b[1]<=y<=b[2] is subdivided to produce an equally spaced rectangular grid of points. This is the initial grid which will be adaptively refined where necessary. The refinement algorithm will divide a given rectangle in four quarters if the available function values indicate that the function does not change smoothly enough on that rectangle.
The criterion of a "smooth enough" change is very similar to the procedure outlined in the previous section. The change is "smooth enough" if all points are finite, nonsingular values, and if the integral of the function over the rectangle is sufficiently well approximated by a certain low-order "cubature" formula.
The two-dimensional integral of the function is estimated using the following 5-point Newton-Cotes cubature:
1/12   0    1/12
0      2/3  0
1/12   0    1/12
An example of using this cubature would be the approximation
Similarly, an 8-point cubature with zero sum is used to estimate the error:
-1/3   2/3   1/6
-1/6  -2/3  -1/2
 1/2   0     1/3
One minor problem with adaptive surface plotting is that the resulting set of points may not correspond to a rectangular grid in the parameter space (x, y). This is because some rectangles from the initial grid will need to be bisected more times than others. So, unless adaptive refinement is disabled, the function Plot3DS produces a somewhat disordered set of points. However, most surface plotting programs require that the set of data points be a rectangular grid in the parameter space. So a smoothing and interpolation procedure is necessary to convert a non-gridded set of data points ("scattered" data) to a gridded set.
The program gnuplot has this facility (the "set dgrid3d" command), although its implementation may not be optimal for all purposes. The current solution is to use the dgrid3d command and to generate a grid at the highest level of bisection ever used during the adaptive refinement process. This quite possibly generates many more grid points than necessary, but a better solution would be much more time-consuming or would require a specialized external program. One such set of programs is the GMT ("generic mapping tools") utility suite.
A two-dimensional parametric plot is a line in a two-dimensional space, defined by two equations such as x=f(t), y=g(t). Two functions f, g and a range of the independent variable t, for example, t[1]<=t<=t[2], need to be specified.
Parametric plots can be used to represent plots of functions in non-Euclidean coordinates. For example, to plot the function rho=Cos(4*phi)^2 in polar coordinates ( rho, phi), one can rewrite the Euclidean coordinates through the polar coordinates, x=rho*Cos(phi), y=rho*Sin(phi), and use the equivalent parametric plot with phi as the parameter: x=Cos(4*phi)^2*Cos(phi), y=Cos(4*phi)^2*Sin(phi).
Sometimes higher-dimensional parametric plots are required. A line plot in three dimensions is defined by three functions of one variable, for example, x=f(t), y=g(t), z=h(t), and a range of the parameter t. A surface plot in three dimensions is defined by three functions of two variables each, for example, x=f(u,v), y=g(u,v), z=h(u,v), and a rectangular domain in the (u, v) space.
The data for parametric plots can be generated separately using the same adaptive plotting algorithms as for ordinary function plots, as if all functions such as f(t) or g(u,v) were unrelated functions. The result would be several separate data sets for the x, y, ... coordinates. These data sets could then be combined using an interactive plotting program.
A different question is whether a CAS really needs to be able to evaluate, say, 10,000 digits of the value of a Bessel function of some 10,000-digit complex argument. It seems likely that no applied problem of natural sciences would need floating-point computations of special functions with such a high precision. However, arbitrary-precision computations are certainly useful in some mathematical applications; e.g. some mathematical identities can be first guessed by a floating-point computation with many digits and then proved.
Very high precision computations of special functions might be useful in the future. But it is already quite clear that computations with moderately high precision (say, 50 or 100 decimal digits) are useful for applied problems. For example, to obtain the leading asymptotic of an analytic function, we could expand it in a series and take the first term. But we need to check that the coefficient of what we think is the leading term of the series does not vanish. This coefficient could be a certain "exact" number such as (Cos(355)+1)^2. This number is "exact" in the sense that it is made of integers and elementary functions. But we cannot say a priori that this number is nonzero. The problem of "zero determination" (finding out whether a certain "exact" number is zero) is known to be algorithmically unsolvable if we allow transcendental functions. The only practical general approach seems to be to compute the number in question with many digits. Usually a few digits are enough, but occasionally several hundred digits are needed.
Implementing an efficient algorithm that computes 100 digits of Sin(3/7) already involves many of the issues that would also be relevant for a 10,000 digit computation. Modern algorithms allow evaluations of all elementary functions in time that is asymptotically logarithmic in the number of digits P and linear in the cost of long multiplication (usually denoted M(P)). Almost all special functions can be evaluated in time that is asymptotically linear in P and in M(P). (However, this asymptotic cost sometimes applies only to very high precision, e.g., P>1000, and different algorithms need to be implemented for calculations in lower precision.)
In Yacas we strive to implement all numerical functions to arbitrary precision. All integer or rational functions return exact results, and all floating-point functions return their value with P correct decimal digits (assuming sufficient precision of the arguments). The current value of P is accessed as GetPrecision() and may be changed by Precision(...).
Implementing an arbitrary-precision floating-point computation of a function f(x), such as f(x)=Exp(x), typically needs the following:
In calculations with machine precision where the number of digits is fixed, the problem of round-off errors is quite prominent. Every arithmetic operation causes a small loss of precision; as a result, the last few digits of the final value are usually incorrect. But if we have an arbitrary precision capability, we can always increase precision by a few more digits during intermediate computations and thus eliminate all round-off error in the final result. We should, of course, take care not to increase the working precision unnecessarily, because any increase of precision means slower calculations. Taking twice as many digits as needed and hoping that the result is precise is not a good solution.
Selecting algorithms for computations is the most non-trivial part of the implementation. We want to achieve arbitrarily high precision, so we need to find either a series, or a continued fraction, or a sequence given by an explicit formula, that converges to the function in a controlled way. It is not enough to use a table of precomputed values or a fixed approximation formula that has a limited precision.
In the last 30 years, interest in arbitrary-precision computations has grown, and many efficient algorithms for elementary and special functions have been published. Most algorithms are iterative. Almost always it is very important to know in advance how many iterations are needed for given x and P. This knowledge allows us to estimate the computational cost, in terms of the required precision P and of the cost of long multiplication M(P), and to choose the best algorithm.
Typically all operations will fall into one of the following categories (sorted by increasing cost):
The cost of long multiplication M(P) is between O(P^2) for low precision and O(P*Ln(P)) for very high precision. In some cases, a different algorithm should be chosen if the precision is high enough that M(P) becomes faster than O(P^2).
Some algorithms also need storage space (e.g. an efficient algorithm for summation of the Taylor series uses O(Ln(P)) temporary P-digit numbers).
Below we shall normally denote by P the required number of decimal digits. The formulae frequently contain conspicuous factors of Ln(10), so it will be clear how to obtain analogous expressions for another base. (Most implementations use a binary base rather than a decimal base since it is more convenient for many calculations.)
Suppose we truncate the series after the n-th term and the series converges "well enough" after that term. Then the error will be approximately equal to the first term we dropped. (This is what we really mean by "converges well enough" and this will generally be the case in all applications, because we would not want to use a series that does not converge well enough.)
The term we dropped is x^(n+1)/(n+1)!. To estimate n! for large n, one can use the inequality
If we use the upper bound on n! from this estimate, we find that the term we dropped is bounded by
We can try to guess the result. The largest term on the LHS grows as n0*Ln(n0) and it should be approximately equal to P*Ln(10); but Ln(n0) grows very slowly, so this gives us a hint that n0 is proportional to P*Ln(10). As a first try, we set n0=P*Ln(10)-2 and compare the RHS with the LHS; we find that we have overshot by a factor Ln(P)-1+Ln(Ln(10)), which is not a large factor. We can now compensate and divide n0 by this factor, so our second try is
Our final result is that it is enough to take
Here is a simple estimate of the normal round-off error in a computation of n terms of a power series. Suppose that the sum of the series is of order 1, that the terms monotonically decrease in magnitude, and that adding one term requires two multiplications and one addition. If all calculations are performed with absolute precision epsilon=10^(-P), then the total accumulated round-off error is 3*n*epsilon. If the relative error is 3*n*epsilon, it means that our answer is something like a*(1+3*n*epsilon) where a is the correct answer. We can see that out of the total P digits of this answer, only the first k decimal digits are correct, where k= -Ln(3*n*epsilon)/Ln(10). In other words, we have lost
This estimate assumes several things about the series (basically, that the series is "well-behaved"). These assumptions must be verified in each particular case. For example, if the series begins with some large terms but converges to a very small value, this estimate is wrong (see the next subsection).
In the previous exercise we found the number of terms n for Exp(x). So now we know how many extra digits of working precision we need for this particular case.
Below we shall have to perform similar estimates of the required number of terms and of the accumulated round-off error in our analysis of the algorithms.
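Such estimates are also easy to cross-check numerically. The following Python sketch (illustrative only; the function name is ours) finds the required number of terms for the Exp(x) series by direct iteration rather than from the analytic formula:

def exp_terms(x, P):
    # smallest n such that the dropped term x^(n+1)/(n+1)! is below 10^(-P)
    term, n = abs(x), 0            # term = x^(n+1)/(n+1)! for n = 0
    while term >= 10.0 ** (-P):
        n += 1
        term *= abs(x) / (n + 1)
    return n

print(exp_terms(1.0, 100))   # number of Taylor terms for Exp(1) at 100 digits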
Consider the computation of Sin(x) by the truncated Taylor series
First, we determine the necessary number of terms N. The magnitude of the sum is never larger than 1. Therefore we need the N-th term of the series to be smaller than 10^(-P). The inequality is (2*N+1)! >10^(P+M*(2*N+1)). We obtain that 2*N+2>e*10^M is a necessary condition, and if P is large, we find approximately
However, taking enough terms does not yet guarantee a good result. The terms of the series grow at first and then start to decrease. The sum of these terms is, however, small. Therefore there is some cancellation and we need to increase the working precision to avoid the round-off. Let us estimate the required working precision.
We need to find the magnitude of the largest term of the series. The ratio of the next term to the previous term is x/(2*k*(2*k+1)) and therefore the maximum will be when this ratio becomes equal to 1, i.e. for 2*k<=>Sqrt(x). Therefore the largest term is of order x^Sqrt(x)/Sqrt(x)! and so we need about M/2*Sqrt(x) decimal digits before the decimal point to represent this term. But we also need to keep at least P digits after the decimal point, or else the round-off error will erase the significant digits of the result. In addition, we will have unavoidable round-off error due to O(P) arithmetic operations. So we should increase precision again by P+Ln(P)/Ln(10) digits plus a few guard digits.
As an example, to compute Sin(10) to P=50 decimal digits with this method, we need a working precision of about 60 digits, while to compute Sin(10000) we need to work with about 260 digits. This shows how inefficient the Taylor series for Sin(x) becomes for large arguments x. A simple transformation x=2*Pi*n+x' would have reduced x to at most 7, and the unnecessary computations with 260 digits would be avoided. The main cause of this inefficiency is that we have to add and subtract extremely large numbers to get a relatively small result of order 1.
We find that the method of Taylor series for Sin(x) at large x is highly inefficient because of round-off error and should be complemented by other methods. This situation seems to be typical for Taylor series.
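The effect is easy to observe even in ordinary double precision. In the following Python sketch (unrelated to the Yacas implementation) the naive summation still works tolerably for x=10, but for x=40 the largest terms of the series are of order 10^16, so nearly all of the 16 available digits are destroyed by cancellation:

import math

def sin_taylor(x):
    # naive summation of the Taylor series for Sin(x) in double precision
    term, total, k = x, x, 1
    while abs(term) > 1e-20:
        term *= -x * x / ((2 * k) * (2 * k + 1))
        total += term
        k += 1
    return total

print(sin_taylor(10.0), math.sin(10.0))   # still agrees to many digits
print(sin_taylor(40.0), math.sin(40.0))   # cancellation destroys the result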
The algorithms for basic arithmetic in the internal math version are currently rather slow compared with gmp. If P is the number of digits of precision, then multiplication and division take M(P)=O(P^2) operations in the internal math. (Of course, multiplication and division by a short integer takes time linear in P.) Much faster algorithms (Karatsuba, Toom-Cook, FFT multiplication, Newton-Raphson division etc.) are implemented in gmp, CLN and some other libraries. The asymptotic cost of multiplication for very large precision is M(P)<=>O(P^1.6) for the Karatsuba method and M(P)=O(P*Ln(P)*Ln(Ln(P))) for the FFT method. In the estimates of computation cost in this book we shall assume that M(P) is at least linear in P and maybe a bit slower.
The costs of multiplication may be different in various arbitrary-precision arithmetic libraries and on different computer platforms. As a rough guide, one can assume that the straightforward O(P^2) multiplication is good until 100-200 decimal digits, the asymptotically fastest method of FFT multiplication is good at the precision of about 5,000 or more decimal digits, and the Karatsuba multiplication is best in the middle range.
Warning: calculations with internal Yacas math using precision exceeding 10,000 digits are currently impractically slow.
In some algorithms it is necessary to compute the integer parts of expressions such as a*Ln(b)/Ln(10) or a*Ln(10)/Ln(2), where a, b are short integers of order O(P). Such expressions are frequently needed to estimate the number of terms in the Taylor series or similar parameters of the algorithms. In these cases, it is important that the result is not underestimated. However, it would be wasteful to compute 1000*Ln(10)/Ln(2) in great precision only to discard most of that information by taking the integer part of that number. It is more efficient to approximate such constants from above by short rational numbers, for example, Ln(10)/Ln(2)<28738/8651 and Ln(2)<7050/10171. The error of such an approximation will be small enough for practical purposes. The function BracketRational can be used to find optimal rational approximations.
The function IntLog (see below) efficiently computes the integer part of a logarithm (for an integer base, not a natural logarithm). If more precision is desired in calculating Ln(a)/Ln(b) for integer a, b, one can compute IntLog(a^k,b) for some integer k and then divide by k.
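A naive version of such an integer logarithm takes only a few lines; the Python sketch below is illustrative (the actual IntLog is more efficient than this plain division loop):

def int_log(a, b):
    # integer part of the base-b logarithm of a, for integers a >= 1, b >= 2
    k = 0
    while a >= b:
        a //= b
        k += 1
    return k

# more precision in Ln(2)/Ln(10): divide the integer log of 2^k by k for larger k
print(int_log(2 ** 1000, 10) / 1000)   # 0.301, close to Ln(2)/Ln(10)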
The exponent E is easy to obtain:
Once we found E, we can write x=10^(E+m) where m=Exp(1000)/Ln(10)-E is a floating-point number, 0<m<1. Then M=10^m. To find M with P (decimal) digits, we need m with also at least P digits. Therefore, we actually need to evaluate Exp(1000)/Ln(10) with 434+P decimal digits before we can find P digits of the mantissa of x. We ran into a perhaps surprising situation: one needs a high-precision calculation even to find the first digit of x, because it is necessary to find the exponent E exactly as an integer, and E is a rather large integer. A normal double-precision numerical calculation would give an overflow error at this point.
Suppose we have found the number x=Exp(Exp(1000)) with some precision. What about finding Sin(x)? Now, this is extremely difficult, because to find even the first digit of Sin(x) we have to evaluate x with absolute error of at most 0.5. We know, however, that the number x has approximately 10^434 digits before the decimal point. Therefore, we would need to calculate x with at least that many digits. Computations with 10^434 digits are clearly far beyond the capability of modern computers. It seems unlikely that even the sign of Sin(Exp(Exp(1000))) will be obtained in the near future.
Suppose that N is the largest integer that our arbitrary-precision facility can reasonably handle. (For Yacas internal math library, N is about 10^10000.) Then it follows that numbers x of order 10^N can be calculated with at most one (1) digit of floating-point precision, while larger numbers cannot be calculated with any precision at all.
It seems that very large numbers can be obtained in practice only through exponentiation or powers. It is unlikely that such numbers will arise from sums or products of reasonably-sized numbers in some formula.
If numbers larger than 10^N are created only by exponentiation operations, then special exponential notation could be used to represent them. For example, a very large number z could be stored and manipulated as an unevaluated exponential z=Exp(M*10^E) where M>=1 is a floating-point number with P digits of mantissa and E is an integer, Ln(N)<E<N. Let us call such objects "exponentially large numbers" or "exp-numbers" for short.
In practice, we should decide on a threshold value N and promote a number to an exp-number when its logarithm exceeds N.
Note that an exp-number z might be positive or negative, e.g. z= -Exp(M*10^E).
Arithmetic operations can be applied to the exp-numbers. However, exp-large arithmetic is of limited use because of an almost certainly critical loss of precision. The power and logarithm operations can be meaningfully performed on exp-numbers z. For example, if z=Exp(M*10^E) and p is a normal floating-point number, then z^p=Exp(p*M*10^E) and Ln(z)=M*10^E. We can also multiply or divide two exp-numbers. But it makes no sense to multiply an exp-number z by a normal number because we cannot represent the difference between z and say 2.52*z. Similarly, adding z to anything else would result in a total underflow, since we do not actually know a single digit of the decimal representation of z. So if z1 and z2 are exp-numbers, then z1+z2 is simply equal to either z1 or z2 depending on which of them is larger.
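These rules are simple enough to sketch in a few lines of Python (purely illustrative, not the Yacas representation; for brevity the logarithm Ln(Abs(z)) is stored directly as one double-precision number instead of the pair (M, E)):

class ExpNum:
    # z = sign * Exp(L), where L = M*10^E is itself a normal floating-point number
    def __init__(self, sign, log_abs):
        self.sign = sign           # +1 or -1
        self.log_abs = log_abs     # Ln(Abs(z))

    def ln(self):                  # Ln brings an exp-number back to normal numbers
        return self.log_abs

    def __pow__(self, p):          # z^p: multiply the logarithm by p
        return ExpNum(self.sign, self.log_abs * p)

    def __mul__(self, other):      # z1*z2: add the logarithms
        return ExpNum(self.sign * other.sign, self.log_abs + other.log_abs)

    def __add__(self, other):      # z1+z2: total underflow of the smaller term
        return self if self.log_abs >= other.log_abs else other

z = ExpNum(+1, 1e300)              # z = Exp(10^300)
print((z * z).ln(), (z ** 3).ln()) # 2e+300 3e+300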
We find that an exp-number z acts as an effective "infinity" compared with normal numbers. But exp-numbers cannot be used as a device for computing limits: the unavoidable underflow will almost certainly produce wrong results. For example, trying to verify
Taking a logarithm of an exp-number brings it back to the realm of normal, representable numbers. However, taking an exponential of an exp-number results in a number which is not representable even as an exp-number. This is because an exp-number z needs to have its exponent E represented exactly as an integer, but Exp(z) has an exponent of order O(z) which is not a representable number. The monstrous number Exp(z) could be only written as Exp(Exp(M*10^E)), i.e. as a "doubly exponentially large" number, or "2-exp-number" for short. Thus we obtain a hierarchy of iterated exp-numbers. Each layer is "unrepresentably larger" than the previous one.
The same considerations apply to very small numbers of the order 10^(-N) or smaller. Such numbers can be manipulated as "exponentially small numbers", i.e. expressions of the form Exp(-M*10^E) with floating-point mantissa M>=1 and integer E satisfying Ln(N)<E<N. Exponentially small numbers act as an effective zero compared with normal numbers.
Taking a logarithm of an exp-small number makes it again a normal representable number. However, taking an exponential of an exp-small number produces 1 because of underflow. To obtain a "doubly exponentially small" number, we need to take a reciprocal of a doubly exponentially large number, or take the exponent of an exponentially large negative power. In other words, Exp(-M*10^E) is exp-small, while Exp(-Exp(M*10^E)) is 2-exp-small.
The practical significance of exp-numbers is rather limited. We cannot obtain even a single significant digit of an exp-number. A "computation" with exp-numbers is essentially a floating-point computation with logarithms of these exp-numbers. A practical problem that needs numbers of this magnitude can probably be restated in terms of more manageable logarithms of such numbers. In practice, exp-numbers could be useful not as a means to get a numerical answer, but as a warning sign of critical overflow or underflow.
Usually one considers infinite continued fractions, i.e. the sequences a[i], b[i] are infinite. The value of an infinite continued fraction is defined as the limit of the fraction truncated after a very large number of terms. (A continued fraction can be truncated after the n-th term if one replaces b[n] by 0.)
An infinite continued fraction does not always converge. Convergence depends on the values of the terms.
The representation of a number via a continued fraction is not unique because we could, for example, multiply the numerator and the denominator of any simple fraction inside it by any number. Therefore one may consider some normalized representations. A continued fraction is called "regular" if b[k]=1 for all k, all a[k] are integer and a[k]>0 for k>=1. Regular continued fractions always converge.
The algorithm for converting a rational number r=n/m into a regular continued fraction is simple. First, we determine the integer part of r, which is Div(n,m). If r is negative, we need to subtract one from this value, so that r=n[0]+x with the remainder x nonnegative and less than 1. The remainder x=Mod(n,m)/m is then inverted, r[1]:=1/x=m/Mod(n,m), and so we have completed the first step in the decomposition, r=n[0]+1/r[1]; now n[0] is an integer but r[1] is perhaps not. We repeat the same procedure on r[1], obtaining the next integer term n[1] and the remainder r[2], and so on, until we reach an n for which r[n] is an integer and there is no more work to do. This process always terminates.
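This procedure translates directly into code; here is a Python sketch (illustrative only, Yacas works with its own integer and rational types):

from fractions import Fraction
import math

def regular_cf(r):
    # terms n[0], n[1], ... of the regular continued fraction of a rational r
    r = Fraction(r)
    terms = []
    while True:
        n0 = math.floor(r)         # integer part, rounded down also for negative r
        terms.append(n0)
        r -= n0                    # remainder, 0 <= r < 1
        if r == 0:
            break
        r = 1 / r                  # invert the remainder and repeat
    return terms

print(regular_cf(Fraction(17, 3)))     # [5, 1, 2]
print(regular_cf(Fraction(130, 83)))   # [1, 1, 1, 3, 3, 1, 2]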
If r is a real number which is known by its floating-point representation at some precision, then we can use the same algorithm because all floating-point values are actually rational numbers.
Real numbers known by their exact representations can sometimes be expressed as infinite continued fractions, for example
The functions GuessRational and NearRational take a real number x and use continued fractions to find rational approximations r=p/q<=>x with "optimal" (small) numerators and denominators p, q.
Suppose we know that a certain number x is rational but we have only a floating-point representation of x with a limited precision, for example, x<=>1.5662650602409638. We would like to guess a rational form for x (in this example x=130/83). The function GuessRational solves this problem.
Consider the following example. The number 17/3 has a continued fraction expansion {5,1,2}. Evaluated as a floating point number with limited precision, it may become something like 17/3+0.00001, where the small number represents a round-off error. The continued fraction expansion of this number is {5, 1, 2, 11110, 1, 5, 1, 3, 2777, 2}. The presence of an unnaturally large term 11110 clearly signifies the place where the floating-point error was introduced; all terms following it should be discarded to recover the continued fraction {5,1,2} and from it the initial number 17/3.
If a continued fraction for a number x is cut right before an unusually large term and evaluated, the resulting rational number has a small denominator and is very close to x. This works because partial continued fractions provide "optimal" rational approximations for the final (irrational) number, and because the magnitude of the terms of the partial fraction is related to the magnitude of the denominator of the resulting rational approximation.
GuessRational(x, prec) needs to choose the place where it should cut the continued fraction. The algorithm for this is somewhat heuristic but it works well enough. The idea is to cut the continued fraction when adding one more term would change the result by less than the specified precision. To realize this in practice, we need an estimate of how much a continued fraction changes when we add one term.
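A Python sketch of this idea (illustrative only; the actual GuessRational uses a similar but not necessarily identical cutting criterion) expands the floating-point value into a continued fraction and stops as soon as the current convergent is already within the requested precision of the input:

from fractions import Fraction
import math

def guess_rational(x, prec):
    eps = Fraction(1, 10 ** prec)
    r = Fraction(x)                     # the exact rational value of the float
    # convergents p/q of the regular continued fraction of r
    p_prev, q_prev = 1, 0
    p, q = math.floor(r), 1
    frac = r - math.floor(r)
    while frac != 0 and abs(r - Fraction(p, q)) > eps:
        frac = 1 / frac
        a = math.floor(frac)            # next continued fraction term
        frac -= a
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
    return Fraction(p, q)

print(guess_rational(1.5662650602409638, 10))   # 130/83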
The routine GuessRational uses a (somewhat weak) upper bound for the difference of continued fractions that differ only by an additional last term:
The above estimate for delta hinges on the inequality
This algorithm works well if x is computed with enough precision; namely, it must be computed to at least as many digits as there are in the numerator and the denominator of the fraction combined. Also, the parameter prec should not be too large (or else the algorithm will find another rational number with a larger denominator that approximates x "better" than the precision to which you know x).
The related function NearRational(x, prec) works somewhat differently. The goal is to find an "optimal" rational number, i.e. with smallest numerator and denominator, that is within the distance 10^(-prec) of a given value x. The function NearRational does not always give the same answer as GuessRational.
The algorithm for NearRational comes from the HAKMEM [Beeler et al. 1972], Item 101C. Their description is terse but clear:
Problem: Given an interval, find in it the rational number with the smallest numerator and denominator. Solution: Express the endpoints as continued fractions. Find the first term where they differ and add 1 to the lesser term, unless it's last. Discard the terms to the right. What's left is the continued fraction for the "smallest" rational in the interval. (If one fraction terminates but matches the other as far as it goes, append an infinity and proceed as above.) |
The HAKMEM text [Beeler et al. 1972] contains several interesting insights relevant to continued fractions and other numerical algorithms.
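The HAKMEM recipe can be coded recursively. The following Python sketch (names are ours, and the recursion is an equivalent reformulation of the recipe rather than a literal transcription) returns the smallest-denominator fraction in a closed interval, and from it a NearRational-style function:

from fractions import Fraction
import math

def simplest_in(lo, hi):
    # fraction with the smallest denominator in the closed interval [lo, hi]
    c = math.ceil(lo)
    if c <= hi:                      # the interval contains an integer
        return Fraction(c)
    f = math.floor(lo)               # both endpoints share this integer part
    return f + 1 / simplest_in(1 / (hi - f), 1 / (lo - f))

def near_rational(x, prec):
    # simplest rational within 10^(-prec) of x
    eps = Fraction(1, 10 ** prec)
    x = Fraction(x)
    return simplest_in(x - eps, x + eps)

print(near_rational(3.1415926, 4))   # 333/106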
In this section we describe some methods for computing general continued fractions and for estimating the number of terms needed to achieve a given precision.
Let us introduce some notation. A continued fraction
This method requires one long division at each step. There may be significant round-off error if a[m] and b[m] have opposite signs, but otherwise the round-off error is very small because a convergent continued fraction is not sensitive to small changes in its terms.
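Assuming the fraction is written as a[0]+b[0]/(a[1]+b[1]/(a[2]+...)), with truncation after the n-th term meaning that b[n] is set to 0 as described above, the bottom-up recurrence is a few lines of Python (illustrative only):

def cf_bottom_up(a, b, n):
    # evaluate a[0] + b[0]/(a[1] + b[1]/(... + b[n-1]/a[n])) backwards,
    # one division per term
    F = a[n]
    for k in range(n - 1, -1, -1):
        F = a[k] + b[k] / F
    return F

# the regular continued fraction 1 + 1/(2 + 1/(2 + ...)) converging to Sqrt(2)
a = [1] + [2] * 20
b = [1] * 20
print(cf_bottom_up(a, b, 20))   # 1.4142135623...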
The idea is that the starting value of the backward recurrence should be chosen not as a[n] but as another number that more closely approximates the infinite remainder of the fraction. The infinite remainder, which we can symbolically write as F[n][Infinity], can be sometimes estimated analytically (obviously, we are unable to compute the remainder exactly). In simple cases, F[n][Infinity] changes very slowly at large n (warning: this is not always true and needs to be verified in each particular case!). Suppose that F[n][Infinity] is approximately constant; then it must be approximately equal to F[n+1][Infinity]. Therefore, if we solve the (quadratic) equation
We may use more terms of the original continued fraction starting from a[n] and obtain a more precise estimate of the remainder. In each case we will only have to solve one quadratic equation.
The "top-down" method is slower but provides an automatic error estimate and can be used to evaluate a continued fraction with more and more terms until the desired precision is achieved. The idea is to rewrite the continued fraction as a sum of a series, where each term f[k] is the difference between the fraction truncated after the k-th term and the fraction truncated after the (k-1)-th term; these terms are accumulated until they become smaller than the required precision.
The formula for f[k] is the following. First the auxiliary sequence P[k], Q[k] for k>=1 needs to be defined by P[1]=0, Q[1]=1, and P[k+1]:=b[k]*Q[k], Q[k+1]:=P[k]+a[k]*Q[k]. Then define f[0]:=a[0] and
Evaluating the next element f[k] requires four long multiplications and one division. This is significantly slower than the one long division, or the two long multiplications, needed per step in the bottom-up methods. Therefore it is desirable to have an a priori estimate of the convergence rate and to be able to choose the number of terms before the computation. Below we shall consider some examples where the formula for f[k] makes it possible to estimate the required number of terms analytically.
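Here is a Python sketch of the top-down evaluation (illustrative only). The recurrence for P[k], Q[k] is the one just quoted; for f[k] we use the standard identity for the difference between successive truncations, f[k] = (-1)^(k+1)*(b[0]*...*b[k-1])/(Q[k]*Q[k+1]), which is consistent with these recurrences:

def cf_top_down(a, b, eps):
    # a[0] + b[0]/(a[1] + ...) as the sum of the differences f[k] of successive
    # truncations; stop when Abs(f[k]) < eps (the running error estimate)
    total = a[0]
    P, Q = 0, 1                 # P[1], Q[1]
    num = 1                     # running product b[0]*...*b[k-1]
    sign = 1                    # (-1)^(k+1), starting at k = 1
    k = 1
    while True:
        num *= b[k - 1]
        Q_next = P + a[k] * Q   # Q[k+1]
        f_k = sign * num / (Q * Q_next)
        total += f_k
        if abs(f_k) < eps:
            return total
        P, Q = b[k] * Q, Q_next # P[k+1], Q[k+1]
        sign = -sign
        k += 1

a = [1] + [2] * 40
b = [1] * 40
print(cf_top_down(a, b, 1e-15))   # Sqrt(2) again; the last f[k] added is below 1e-15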
The bottom-up methods are simpler and faster than the top-down methods but require the number of terms to be known in advance. In many cases the required number of terms can be estimated analytically, and then the bottom-up methods are always preferable. But in some cases the convergence analysis is very complicated.
The plain bottom-up method requires one long division at each step, while the bottom-up method with two recurrences requires two long multiplications at each step. Since the time needed for a long division is usually about four times that for a long multiplication (e.g. when the division is computed by Newton's method), the second variation of the bottom-up method is normally faster.
The estimate of the remainder improves the convergence of the bottom-up method and should always be used if available.
If an estimate of the number of terms is not possible, the top-down methods should be used, looping until the running error estimate shows enough precision. This incurs a performance penalty of at least 100% and at most 300%. The top-down method with two steps at once should be used only when the formula for f[k] results in alternating signs.
Usually, a continued fraction representation of a function will converge geometrically or slower, i.e. at least O(P) terms are needed for a precision of P digits. If a geometrically convergent Taylor series representation is also available, the continued fraction method will be slower because it requires at least as many or more long multiplications per term. Also, in most cases the Taylor series can be computed much more efficiently using the rectangular scheme. (See, e.g., the section on ArcTan(x) for a more detailed consideration.)
However, there are some functions for which a Taylor series is not easily computable or does not converge but a continued fraction is available. For example, the incomplete Gamma function and related functions can be computed using continued fractions in some domains of their arguments.
So far we have reduced the difference between F[m][n+1] and F[m][n] to a similar difference on the next level m+1 instead of m; i.e. we can increment m but keep n fixed. We can apply this formula to F[0][n+1]-F[0][n], i.e. for m=0, and continue applying the same recurrence relation until m reaches n. The result is
Now the problem is to simplify the two long products in the denominator. We notice that F[1][n] has F[2][n] in the denominator, and therefore F[1][n]*F[2][n]=F[2][n]*a[1]+b[1]. The next product is F[1][n]*F[2][n]*F[3][n] and it simplifies to a linear function of F[3][n], namely F[1][n]*F[2][n]*F[3][n] = (b[1]+a[1]*a[2])*F[3][n]+b[1]*a[2]. So we can see that there is a general formula
Having found the coefficients P[k], Q[k], we can now rewrite the long products in the denominator, e.g.
For example, the continued fraction
There are some cases when a continued fraction representation is efficient. The complementary error function Erfc(x) can be computed using the continued fraction due to Laplace (e.g. [Thacher 1963]),
The error function is a particular case of the incomplete Gamma function
Suppose we are given the terms a[k], b[k] that define an infinite continued fraction, and we need to estimate its convergence rate. We have to find the number of terms n for which the error of approximation is less than a given epsilon. In our notation, we need to solve Abs(f[n+1])<epsilon for n.
The formula that we derived for f[n+1] gives an error estimate for the continued fraction truncated at the n-th term. But this formula contains the numbers Q[n] in the denominator. The main problem is to find how quickly the sequence Q[n] grows. The recurrence relation for this sequence can be rewritten as
We have used this bound to estimate the relative error of the continued fraction expansion for ArcTan(x) at small x (elsewhere in this book). However, we found that at large x this bound becomes greater than 1. This does not mean that the continued fraction does not converge and cannot be used to compute ArcTan(x) when x>1, but merely indicates that the "simple bound" is too weak. The sequence Q[n] actually grows faster than the product of all a[k] and we need a tighter bound on this growth. In many cases such a bound can be obtained by the method of generating functions.
The asymptotic growth of the sequence Q[n] can be estimated by the method of steepest descent, also known as Laplace's method. (See, e.g., [Olver 1974], ch. 3, sec. 7.5.) This method is somewhat complicated but quite powerful. The method requires that we find an integral representation for Q[n] (usually a contour integral in the complex plane). Then we can convert the integral into an asymptotic series in k^(-1).
Along with the general presentation of the method, we shall consider an example when the convergence rate can be obtained analytically. The example is the representation of the complementary error function Erfc(x),
The "simple bound" would give Abs(f[n+1])<=v^n*n! and this expression grows with n. But we know that the above continued fraction actually converges for any v, so f[n+1] must tend to zero for large n. It seems that the "simple bound" is not strong enough for any v and we need a better bound.
An integral representation for Q[n] can be obtained using the method of generating functions. Consider a function G(s) defined by the infinite series
Note that the above series for the function G(s) may or may not converge for any given s; we shall manipulate G(s) as a formal power series until we obtain an explicit representation. What we really need is an analytic continuation of G(s) to the complex s.
It is generally the case that if we know a simple linear recurrence relation for a sequence, then we can also easily find its generating function. The generating function will satisfy a linear differential equation. To guess this equation, we write down the series for G(s) and its derivative G'(s) and try to find their linear combination which is identically zero because of the recurrence relation. (There is, of course, a computer algebra algorithm for doing this automatically.)
Taking the derivative G'(s) produces the forward-shifted series
In the case of our sequence Q[n] above, the recurrence relation is
The second step is to obtain an integral representation for Q[n], so that we could use the method of steepest descents and find its asymptotic at large n.
In our notation Q[n+1] is equal to the n-th derivative of the generating function at s=0:
There are two ways to proceed. One is to obtain an integral representation for G(s), for instance
The second possibility is to express Q[n] as a contour integral in the complex plane around s=0 in the counter-clockwise direction:
In the particular case of the continued fraction for Erfc(x), the calculations are somewhat easier if Re(v)>0 (where v:=1/(2*x^2)). Full details are given in a separate section. The result for Re(v)>0 is
Note that this is not merely a bound but an actual asymptotic estimate of f[n+1]. (Stirling's formula can also be derived using the method of steepest descent from an integral representation of the Gamma function, in a similar way.)
Defined as above, the value of f[n+1] is in general a complex number. The absolute value of f[n+1] can be found using the formula
When Re(v)<=0, the same formula can be used (this can be shown by a more careful consideration of the branches of the square roots). The continued fraction does not converge when Re(v)<0 and Im(v)=0 (i.e. for pure imaginary x). This can be seen from the above formula: in that case Re(v)= -Abs(v) and Abs(f[n+1]) does not decrease when n grows.
These estimates show that the error of the continued fraction approximation to Erfc(x) (when it converges) decreases with n slower than in a geometric progression. This means that we need to take O(P^2) terms to get P digits of precision.
To use the method of steepest descent, we represent the integrand as an exponential of some function g(t,n) and find "stationary points" where this function has local maxima:
We only need to consider very large values of n, so we can neglect terms of order O(1/Sqrt(n)) or smaller. We find that, in our case, two peaks of Re(g) occur at approximately t1<=> -1/2+Sqrt(n*v) and t2<=> -1/2-Sqrt(n*v). We assume that n is large enough so that n*v>1/2. Then the first peak is at a positive t and the second peak is at a negative t. The contribution of the peaks can be computed from the Taylor approximation of g(t,n) near the peaks. We can expand, for example,
Then we obtain the estimate
Usually one of the stationary points has the largest value of Re(g(s)); this is the dominant stationary point. If s0 is the dominant stationary point and g2=(Deriv(s,2)g(s0)) is the second derivative of g at that point, then the asymptotic of the integral is
We have to choose a new contour and check the convergence of the resulting integral separately. In each case we may need to isolate the singularities of G(s) or to find the regions of infinity where G(s) quickly decays (so that the infinite parts of the contour might be moved there). There is no prescription that works for all functions G(s).
Let us return to our example. For G(s)=Exp(s+(v*s^2)/2), the function g(s) has no singularities except the pole at s=0. There are two stationary points located at the (complex) roots s1, s2 of the quadratic equation v*s^2+s-n=0. Note that v is an arbitrary (nonzero) complex number. We now need to find which of the two stationary points gives the dominant contribution. By comparing Re(g(s1)) and Re(g(s2)) we find that the point s with the largest real part gives the dominant contribution. However, if Re(s1)=Re(s2) (this happens only if v is real and v<0, i.e. if x is pure imaginary), then both stationary points contribute equally. Barring that possibility, we find (with the usual definition of the complex square root) that the dominant contribution for large n is from the stationary point at
This formula agrees with the asymptotic for Q[n+1] found above for real v>0, when we use Stirling's formula for (n-1)!:
The treatment for Re(v)<0 is similar.
Newton's method sometimes suffers from a sensitivity to the initial guess. If the initial value x[0] is not chosen sufficiently close to the root, the iterations may converge very slowly or not converge at all. To remedy this, one can combine Newton's iteration with simple bisection. Once the root is bracketed inside an interval (a, b), one checks whether (a+b)/2 is a better approximation for the root than that obtained from Newton's iteration. This guarantees at least linear convergence in the worst case.
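A Python sketch of this safeguarded iteration (illustrative; the function names and the stopping criterion are our own choices) keeps a bracketing interval and falls back to its midpoint whenever the Newton step leaves the bracket:

import math

def newton_bisect(f, fp, a, b, tol=1e-14, max_iter=200):
    # [a, b] must bracket a root: f(a) and f(b) have opposite signs
    fa = f(a)
    x = (a + b) / 2
    fx = f(x)
    for _ in range(max_iter):
        # shrink the bracket using the sign of f(x)
        if fa * fx <= 0:
            b = x
        else:
            a, fa = x, fx
        d = fp(x)
        x_new = x - fx / d if d != 0 else (a + b) / 2
        if not (a < x_new < b):        # Newton step left the bracket: bisect
            x_new = (a + b) / 2
        if abs(x_new - x) < tol:
            return x_new
        x, fx = x_new, f(x_new)
    return x

# the root of the equation 2*Cos(x) = x discussed below
print(newton_bisect(lambda t: 2 * math.cos(t) - t,
                    lambda t: -2 * math.sin(t) - 1, 0.0, 2.0))   # about 1.0299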
For some equations f(x)=0, Newton's method converges faster than quadratically. For example, solving Sin(x)=0 in the neighborhood of x=3.14159 gives "cubic" convergence, i.e. the number of correct digits is tripled at each step. This happens because Sin(x) near its root x=Pi has a vanishing second derivative and thus the function is particularly well approximated by a straight line.
Halley's method can be generalized to any function f(x). A cubically convergent iteration is always obtained if we replace the equation f(x)=0 by an equivalent equation
The Halley iteration for the equation f(x)=0 can be written as
Halley's iteration, despite its faster convergence rate, may be more cumbersome to evaluate than Newton's iteration and so it may not provide a more efficient numerical method for a given function. Only in some special cases is Halley's iteration just as simple to compute as Newton's iteration.
Halley's method is sometimes less sensitive to the choice of the initial point x[0]. An extreme example of sensitivity to the initial point is the equation x^(-2)=12 for which Newton's iteration x'=3/2*x-6*x^3 converges to the root only from initial points 0<x[0]<0.5 and wildly diverges otherwise, while Halley's iteration converges to the root from any x[0]>0.
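This comparison is easy to reproduce numerically. The Python sketch below (illustrative only) uses the standard explicit form of Halley's iteration, x' = x - 2*f*f'/(2*f'^2 - f*f''), applied to the equation x^(-2)=12 from this example:

def halley_step(f, fp, fpp, x):
    # one step of Halley's iteration x' = x - 2*f*f'/(2*f'^2 - f*f'')
    fx, d1, d2 = f(x), fp(x), fpp(x)
    return x - 2 * fx * d1 / (2 * d1 ** 2 - fx * d2)

def newton_step(f, fp, x):
    return x - f(x) / fp(x)

f   = lambda x: x ** -2 - 12           # the equation x^(-2) = 12
fp  = lambda x: -2 * x ** -3
fpp = lambda x: 6 * x ** -4

x_h = x_n = 5.0                        # a starting point far outside (0, 0.5)
for _ in range(5):
    x_h = halley_step(f, fp, fpp, x_h)
    x_n = newton_step(f, fp, x_n)
print(x_h)   # approaching the root 1/Sqrt(12) = 0.28867...
print(x_n)   # Newton has diverged wildly from the same starting point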
It is at any rate not true that Halley's method always converges better than Newton's method. For instance, it diverges on the equation 2*Cos(x)=x unless started at x[0] within the interval (-1/6*Pi, 7/6*Pi). Another example is the equation Ln(x)=a. This equation can be used to compute x=Exp(a) if a fast method for computing Ln(x) is available (e.g. the AGM-based method). For this equation, Newton's iteration
When it converges, Halley's iteration can still converge very slowly for certain functions f(x), for example, for f(x)=x^n-a if n^n>a. For such functions that have very large and rapidly changing derivatives, no general method can converge faster than linearly. In other words, a simple bisection will generally do just as well as any sophisticated iteration, until the root is approximated very precisely. Halley's iteration combined with bisection seems to be a good choice for such problems.
In the above examples, y is a small quantity and the series represents corrections to the initial value x, therefore the order of convergence is equal to the first discarded order of y in the series.
These simple constructions are possible because the functions satisfy simple identities, such as Exp(a+b)=Exp(a)*Exp(b) or Sqrt(a*b)=Sqrt(a)*Sqrt(b). For other functions the formulae quickly become very complicated and unsuitable for practical computations.
For practical evaluation, iterations must be supplemented with "quality control". For example, if x0 and x1 are two consecutive approximations that are already very close, we can quickly compute the achieved (relative) precision by finding the number of leading zeros in the number
Suppose x is an approximation that is correct to P digits; then we expect the quantity x' to be correct to 2*P digits. Therefore we should perform calculations in the first formula with 2*P digits; this means three long multiplications, 3*M(2*P). Now consider the calculation in the second formula. First, the quantity y:=1-a*x^2 is computed using two 2*P-digit multiplications.
The advantage is even greater with higher-order methods. For example, a fourth-order iteration for the inverse square root can be written as
The asymptotic cost of finding the root x of the equation f(x)=0 with P digits of precision is usually the same as the cost of computing f(x) [Brent 1975]. The main argument can be summarized by the following simple example. To get the result to P digits, we need O(Ln(P)) Newton's iterations. At each iteration we shall have to compute the function f(x) to a certain number of digits. Suppose that we start with one correct digit and that each iteration costs us c*M(2*P) operations where c is a given constant, while the number of correct digits grows from P to 2*P. Then the total cost of k iterations is
Thus, if we have a fast method of computing, say, ArcTan(x), then we immediately obtain a method of computing Tan(x) which is asymptotically as fast (up to a constant).
Increasing the order by 1 costs us comparatively little, and we may change the order k at any time. Is there a particular order k that gives the smallest computational cost and should be used for all iterations, or does the order need to be adjusted during the computation? A natural question is to find the optimal computational strategy.
It is difficult to fully analyze this question, but it seems that choosing a particular order k for all iterations is close to the optimal strategy.
A general "strategy" is a set of orders S(P,P[0])=(k[1], k[2], ..., k[n]) to be chosen at the first, second, ..., n-th iteration, given the initial precision P[0] and the required final precision P. At each iteration, the precision will be multiplied by the factor k[i]. The optimal strategy S(P,P[0]) is a certain function of P[0] and P such that the required precision is reached, i.e.
If we assume that the cost of multiplication M(P) is proportional to some power of P, for instance M(P)=P^mu, then the cost of each iteration and the total cost are homogeneous functions of P and P[0]. Therefore the optimal strategy is a function only of the ratio P/P[0]. We can multiply both P[0] and P by a constant factor and the optimal strategy will remain the same. We can denote the optimal strategy S(P/P[0]).
We can check whether it is better to use several iterations at smaller orders instead of one iteration at a large order. Suppose that M(P)=P^mu, the initial precision is 1 digit, and the final precision P=k^n. We can use either n iterations of the order k or 1 iteration of the order P. The cost of one iteration of order P at target precision P is C(P,P), whereas the total cost of n iterations of order k is
So far we have only considered strategies that use the same order k for all iterations, and we have not yet shown that such strategies are the best ones. We now give a plausible argument (not quite a rigorous proof) to justify this claim.
Consider the optimal strategy S(P^2) for the initial precision 1 and the final precision P^2, when P is very large. Since it is better to use several iterations at lower orders, we may assume that the strategy S(P^2) contains many iterations and that one of these iterations reaches precision P. Then the strategy S(P^2) is equivalent to a sequence of the two optimal strategies to go from 1 to P and from P to P^2. However, both strategies must be the same because the optimal strategy only depends on the ratio of precisions. Therefore, the optimal strategy S(P^2) is a sequence of two identical strategies (S(P), S(P)).
Suppose that k[1] is the first element of S(P). The optimal strategy to go from precision k[1] to precision P*k[1] is also S(P). Therefore the second element of S(P) is also equal to k[1], and by extension all elements of S(P) are the same.
A similar consideration gives the optimal strategy for other iterations that compute inverses of analytic functions, such as Newton's iteration for the inverse square root or for higher roots. The difference is that the value of c should be chosen as the equivalent number of multiplications needed to compute the function. For instance, c=1 for division and c=2 for the inverse square root iteration.
The conclusion is that in each case we should compute the optimal order k in advance and use this order for all iterations.
Divisions by large integers k! and separate evaluations of powers x^k are avoided if we store the previous term. The next term can be obtained by a short division of the previous term by k and a long multiplication by x. Then we only need O(N) long multiplications to evaluate the series. Usually the required number of terms N=O(P), so the total cost is O(P*M(P)).
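In Python (a double-precision sketch; the Yacas version does the same with arbitrary-precision numbers) the straightforward summation with a stored previous term looks like this:

import math

def exp_taylor(x, eps=1e-16):
    term, total, k = 1.0, 1.0, 1
    while abs(term) > eps * abs(total):
        term *= x / k            # next term x^k/k! from the previous one
        total += term
        k += 1
    return total

print(exp_taylor(0.5), math.exp(0.5))   # both close to 1.6487212707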
There is no accumulation of round-off error in this method if x is small enough (in the case of Exp(x), a sufficient condition is Abs(x)<1/2). To see this, suppose that x is known to P digits (with relative error 10^(-P)). Since Abs(x)<1/2, the n-th term x^n/n! <4^(-n) (this is a rough estimate but it is enough). Since each multiplication by x results in adding 1 significant bit of relative round-off error, the relative error of x^n/n! is about 2^n times the relative error of x, i.e. 2^n*10^(-P). So the absolute round-off error of x^n/n! is not larger than
In practice, one could truncate the precision of x^n/n! as n grows, leaving a few guard bits each time to keep the round-off error negligibly small and yet to gain some computation speed. This however does not change the asymptotic complexity of the method---it remains O(P*M(P)).
However, if x is a small rational number, then the multiplication by x is short and takes O(P) operations. In that case, the total complexity of the method is O(P^2) which is always faster than O(P*M(P)).
If the coefficients a[k] are related by a simple ratio, then Horner's scheme may be modified to simplify the calculations. For example, the Horner scheme for the Taylor series for Exp(x) may be written as
Similarly to the simple summation method, the working precision for Horner's scheme may be adjusted to reduce the computation time: for example, x*a[N-1] needs to be computed with just a few significant digits if x is small. This does not change the asymptotic complexity of the method: it requires O(N)=O(P) long multiplications by x, so for general real x the complexity is again O(P*M(P)). However, if x is a small rational number, then the multiplication by x is short and takes O(P) operations. In that case, the total complexity of the method is O(P^2).
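One such nested form for Exp(x) is 1+x*(1+x/2*(1+x/3*(...))); a short Python sketch (illustrative only) evaluates it from the innermost level outwards:

def exp_horner(x, N):
    # evaluate 1 + x*(1 + x/2*(1 + x/3*(... (1 + x/N) ...))) from the inside out
    acc = 1.0
    for k in range(N, 0, -1):
        acc = 1.0 + x / k * acc
    return acc

print(exp_horner(0.5, 20))   # close to Exp(0.5) = 1.6487212707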
The "rectangular" algorithm uses 2*Sqrt(N) long multiplications (assuming that the coefficients of the series are short rational numbers) and Sqrt(N) units of storage. For high-precision floating-point x, this method provides a significant advantage over Horner's scheme.
Suppose we need to evaluate Sum(k,0,N,a[k]*x^k) and we know the number of terms N in advance. Suppose also that the coefficients a[k] are rational numbers with small numerators and denominators, so a multiplication a[k]*x is not a long multiplication (usually, either a[k] or the ratio a[k]/a[k-1] is a short rational number). Then we can organize the calculation in a rectangular array with c columns and r rows like this,
The total required number of long multiplications is r+c+Ln(r)-2. The minimum number of multiplications, given that r*c>=N, is around 2*Sqrt(N) at r<=>Sqrt(N)-1/2. Therefore, by arranging the Taylor series in a rectangle with sides r and c, we obtain an algorithm which costs O(Sqrt(N)) instead of O(N) long multiplications and requires Sqrt(N) units of storage.
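A Python sketch of the rectangular arrangement (illustrative; the names and the choice of c close to Sqrt(N) columns are ours) precomputes the powers x^0..x^(c-1), forms each row with short multiplications by the coefficients, and combines the rows by Horner's rule in x^c:

import math

def rect_eval(coeffs, x):
    N = len(coeffs)
    c = max(1, math.isqrt(N))          # number of columns, about Sqrt(N)
    xpow = [1.0]
    for _ in range(c - 1):
        xpow.append(xpow[-1] * x)      # the powers x^0 .. x^(c-1)
    xc = xpow[-1] * x                  # x^c
    total = 0.0
    for i in reversed(range(0, N, c)): # rows, highest first
        row = sum(a * p for a, p in zip(coeffs[i:i + c], xpow))
        total = total * xc + row       # Horner's rule in x^c over the rows
    return total

coeffs = [1.0 / math.factorial(k) for k in range(30)]   # Taylor series of Exp
print(rect_eval(coeffs, 0.7), math.exp(0.7))            # both close to 2.01375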
One might wonder if we should not try to arrange the Taylor series in a cube or another multidimensional matrix instead of a rectangle. However, calculations show that this does not save time: the optimal arrangement is the two-dimensional rectangle.
The rectangular method saves the number of long multiplications by x but increases the number of short multiplications and additions. If x is a small integer or a small rational number, multiplications by x are fast and it does not make sense to use the rectangular method. Direct evaluation schemes are more efficient in that case.
Reducing the working precision saves some computation time. (We also need to estimate M but this can usually be done quickly by bit counting.) Instead of O(Sqrt(P)) long multiplications at precision P, we now need one long multiplication at precision P, another long multiplication at precision P-M, and so on. This technique will not change the asymptotic complexity which remains O(Sqrt(P)*M(P)), but it will reduce the constant factor in front of the O.
As with the previous two methods, there is no accumulated round-off error if x is small.
In the first case, it is better to use either Horner's scheme (for small P, slow multiplication) or the binary splitting technique (for large P, fast multiplication). The rectangular method is actually slower than Horner's scheme if x and the coefficients a[k] are small rational numbers. In the second case (when x is a floating-point number), it is better to use the "rectangular" algorithm.
In both cases we need to know the number of terms in advance, as we will have to repeat the whole calculation if a few more terms are needed. The simple summation method rarely gives an advantage over Horner's scheme, because it is almost always the case that one can easily compute the number of terms required for any target precision.
Note that if the argument x is not small, round-off error will become significant and needs to be considered separately for a given series.
For example, consider the Taylor series for Sin(x),
The above series expansions are asymptotic in the following sense: if we truncate the series and then take the limit of very large x, then the difference between the two sides of the equation goes to zero.
It is important that the series be first truncated and then the limit of large x be taken. Usually, an asymptotic series, if taken as an infinite series, does not actually converge for any finite x. This can be seen in the examples above. For instance, in the asymptotic series for Erfc(x) the n-th term has (2*n-1)!! in the numerator which grows faster than the n-th power of any number. The terms of the series decrease at first but then eventually start to grow, even if we select a large value of x.
The way to use an asymptotic series for a numerical calculation is to truncate the series well before the terms start to grow.
Error estimates of the asymptotic series are sometimes difficult, but the rule of thumb seems to be that the error of the approximation is usually not greater than the first discarded term of the series. This can be understood intuitively as follows. Suppose we truncate the asymptotic series at a point where the terms still decrease, safely before they start to grow. For example, let the terms around the 100-th term be A[100], A[101], A[102], ..., each of these numbers being significantly smaller than the previous one, and suppose we retain A[100] but drop the terms after it. Then our approximation would have been a lot better if we retained A[101] as well. (This step of the argument is really an assumption about the behavior of the series; it seems that this assumption is correct in many practically important cases.) Therefore the error of the approximation is approximately equal to A[101].
The inherent limitation of the method of asymptotic series is that for any given x, there will be a certain place in the series where the term has the minimum absolute value (after that, the series is unusable), and the error of the approximation cannot be smaller than that term.
For example, take the above asymptotic series for Erfc(x). The logarithm of the absolute value of the n-th term can be estimated using Stirling's formula for the factorial as
We find that for a given finite x, no matter how large, there is a maximum precision that can be achieved with the asymptotic series; if we need more precision, we have to use a different method.
However, sometimes the function we are evaluating allows identity transformations that relate f(x) to f(y) with y>x. For example, the Gamma function satisfies x*Gamma(x)=Gamma(x+1). In this case we can transform the function so that we would need to evaluate it at large enough x for the asymptotic series to give us enough precision.
More formally, one can define the function of two arguments AGM(x,y) as the limit of the sequence a[k] where a[k+1]=1/2*(a[k]+b[k]), b[k+1]=Sqrt(a[k]*b[k]), and the initial values are a[0]=x, b[0]=y. (The limit of the sequence b[k] is the same.) This function is obviously linear, AGM(c*x,c*y)=c*AGM(x,y), so in principle it is enough to compute AGM(1,x) or arbitrarily select c for convenience.
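For illustration, here is a direct double-precision computation of the AGM sequence in Python (the function name and the stopping criterion are ours; the arbitrary-precision version must carry the full working precision at every step, as explained below):

import math

def agm(x, y, tol=1e-15):
    a, b = x, y
    while abs(a - b) > tol * abs(a):
        a, b = (a + b) / 2, math.sqrt(a * b)   # arithmetic and geometric means
    return a

print(agm(1.0, 2.0))   # approximately 1.456791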
Gauss and Legendre knew that the limit of the AGM sequence is related to the complete elliptic integral,
The AGM sequence is also defined for complex values a, b. One needs to take a square root Sqrt(a*b), which requires a branch cut to be well-defined. Selecting the natural cut along the negative real semiaxis (Re(x)<0, Im(x)=0), we obtain an AGM sequence that converges for any initial values x, y with positive real part.
Let us estimate the convergence rate of the AGM sequence starting from x, y, following the paper [Brent 1975]. Clearly the worst case is when the numbers x and y are very different (one is much larger than another). In this case the numbers a[k], b[k] become approximately equal after about k=1/Ln(2)*Ln(Abs(Ln(x/y))) iterations (note: Brent's paper online mistypes this as 1/Ln(2)*Abs(Ln(x/y))). This is easy to see: if x is much larger than y, then at each step the ratio r:=x/y is transformed into r'=1/2*Sqrt(r). When the two numbers become roughly equal to each other, one needs about Ln(n)/Ln(2) more iterations to make the first n (decimal) digits of a[k] and b[k] coincide, because the relative error epsilon=1-b/a decays approximately as epsilon[k]<=>1/8*Exp(-2^k).
Unlike Newton's iteration, the AGM sequence does not correct errors, so all numbers need to be computed with full precision. Actually, slightly more precision is needed to compensate for accumulated round-off error. Brent (in [Brent 1975]) says that O(Ln(Ln(n))) bits of accuracy are lost to round-off error if there is a total of n iterations.
The AGM sequence can be used for fast computations of Pi, Ln(x) and ArcTan(x). However, currently the limitations of Yacas internal math make these methods less efficient than simpler methods based on Taylor series and Newton iterations.
If we need to take O(P) terms of the series to obtain P digits of precision, then ordinary methods would require O(P^2) arithmetic operations. (Each term needs O(P) operations because all coefficients are rational numbers with O(P) digits and we need to perform a few short multiplications or divisions.) The binary splitting method requires O(M(P*Ln(P))*Ln(P)) operations instead of the O(P^2) operations. In other words, we need to perform long multiplications of integers of size O(P*Ln(P)) digits, but we need only O(Ln(P)) such multiplications. The binary splitting method performs better than the straightforward summation method if the cost of multiplication is lower than O(P^2)/Ln(P). This is usually true only for large enough precision (at least a thousand digits).
Thus there are two main limitations of the binary splitting method:
The main advantages of the method are:
For example, the Taylor series for ArcSin(x) (when x is a short rational number) is of this form:
The goal is to compute the sum S(0,N) with a chosen number of terms N. Instead of computing the rational number S directly, the binary splitting method proposes to compute the following four integers P, Q, B, and T:
Thus the range [0, N) is split in half on each step. At the base of recursion the four integers P, Q, B, and T are computed directly. At the end of the calculation (top level of recursion), one floating-point division is performed to recover S=T/(B*Q). It is clear that the four integers carry the full information needed to continue the calculation with more terms. So this algorithm is easy to checkpoint and parallelize.
The integers P, Q, B, and T grow during the calculation to O(N*Ln(N)) bits, and we need to multiply these large integers. However, there are only O(Ln(N)) steps of recursion and therefore O(Ln(N)) long multiplications are needed. If the series converges linearly, we need N=O(P) terms to obtain P digits of precision. Therefore, the total asymptotic cost of the method is O(M(P*Ln(P))*Ln(P)) operations.
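As a concrete illustration, here is a reduced binary splitting for the series Exp(1)=Sum(k,0,Infinity,1/k!) in Python. It carries only two integers, a numerator and a denominator, which happens to be enough for this particular series; the full scheme described above carries the four integers P, Q, B, and T:

from fractions import Fraction

def bsplit(a, b):
    # integers (p, q) with p/q = Sum(k, a+1 .. b, 1/((a+1)*(a+2)*...*k))
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p_left, q_left = bsplit(a, m)
    p_right, q_right = bsplit(m, b)
    # the only long multiplications happen when the two halves are combined
    return p_left * q_right + p_right, q_left * q_right

p, q = bsplit(0, 100)              # 100 terms of the series for Exp(1)
print(float(1 + Fraction(p, q)))   # 2.718281828459045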
A more general form of the binary splitting technique is also given in [Haible et al. 1998]. The generalization applies to series of the form
The binary splitting technique can also be used for series with complex integer coefficients, or more generally for coefficients in any finite algebraic extension of integers, e.g. Z[ Sqrt(2)] (the ring of numbers of the form p+q*Sqrt(2) where p, q are integers). Thus we may compute the Bessel function J0(Sqrt(3)) using the binary splitting method and obtain exact intermediate results of the form p+q*Sqrt(3). But this will still not help compute J0(Pi). This is a genuine limitation of the binary splitting method.
We also assume that the power is positive, or else we need to perform an additional division to obtain x^(-y)=1/x^y.
If x!=0 is known to a relative precision epsilon, then x^y has the relative precision epsilon*y. This means a loss of precision if Abs(y)>1 and an improvement of precision otherwise.
The algorithm is based on the following trick: if n is even, say n=2*k, then x^n=x^k^2; and if n is odd, n=2*k+1, then x^n=x*x^k^2. Thus we can reduce the calculation of x^n to the calculation of x^k with k<=n/2, using at most two long multiplications. This reduction is one step of the algorithm; at each step n is reduced to at most half. This algorithm stops when n becomes 1, which happens after m steps where m is the number of bits in n. So the total number of long multiplications is at most 2*m=(2*Ln(n))/Ln(2). More precisely, it is equal to m plus the number of nonzero bits in the binary representation of n. On the average, we shall have 3/2*Ln(n)/Ln(2) long multiplications. The computational cost of the algorithm is therefore O(M(P)*Ln(n)). This should be compared with e.g. the cost of the best method for Ln(x) which is O(P*M(P)).
The outlined procedure is most easily implemented using recursive calls. The depth of recursion is of order Ln(n) and should be manageable for most real-life applications. The Yacas code would look like this:
10 # power(_x,1) <-- x;
20 # power(_x,n_IsEven) <-- power(x,n>>1)^2;
30 # power(_x,n_IsOdd) <-- x*power(x,n>>1)^2;
If we wanted to avoid recursion with its overhead, we would have to obtain the bits of the number n in reverse order. This is possible but is somewhat cumbersome unless we store the bits in an array.
It is easier to implement the non-recursive version of the squaring algorithm in a slightly different form. Suppose we obtain the bits b[i] of the number n in the usual order, so that n=b[0]+2*b[1]+...+b[m]*2^m. Then we can express the power x^n as
In the Yacas script form, the algorithm looks like this:
power(x_IsPositiveInteger,n_IsPositiveInteger) <--
[
  Local(result, p);
  result := 1;
  p := x;
  While(n != 0)
  [ // at step k, p = x^(2^k)
    if (IsOdd(n)) result := result*p;
    p := p*p;
    n := n>>1;
  ];
  result;
];
The same algorithm can be used to obtain a power of an integer modulo another integer, Mod(x^n,M), if we replace the multiplication p*p by a modular multiplication, such as p:=Mod(p*p,M). Since the remainder modulo M is computed at each step, the results do not grow beyond M. This makes it possible to compute even extremely large modular powers of integers efficiently.
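For illustration, here is the same loop in Python with the modular reduction inserted (this is also what Python's built-in pow(x, n, M) computes); the name mod_power is ad hoc.

def mod_power(x, n, M):
    # Binary squaring with reduction modulo M at every step, so intermediate
    # results never grow beyond M.
    result = 1
    p = x % M
    while n != 0:
        if n & 1:                  # the current bit of n is set
            result = (result * p) % M
        p = (p * p) % M            # p = x^(2^k) mod M at step k
        n >>= 1
    return result

assert mod_power(3, 10**6, 10**9 + 7) == pow(3, 10**6, 10**9 + 7)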
Matrix multiplication, or, more generally, multiplication in any given ring, can be substituted into the algorithm instead of the normal multiplication. The function IntPowerNum encapsulates the computation of the n-th power of an expression using the binary squaring algorithm.
The squaring algorithm can be improved a little bit if we are willing to use recursion or to obtain the bits of n in the reverse order. (This was suggested in the exercise 4.21 in the book [von zur Gathen et al. 1999].) Let us represent the power n in base 4 instead of base 2. If q[k] are the digits of n in base 4, then we can express
We might then use the base 8 instead of 4 and obtain a further small improvement. (Using bases other than powers of 2 is less efficient.) But the small gain in speed probably does not justify the increased complexity of the algorithm.
An exceptional case is when n is a rational number with a very small numerator and denominator, for example, n=2/3. In this case it is faster to take the square of the cubic root of x. (See the section on the computation of roots below.) Then the case of negative x should be handled separately. This speedup is not implemented in Yacas.
Note that the relative precision changes when taking powers. If x is known to relative precision epsilon, i.e. x represents a real number that could be x*(1+epsilon), then x^2<=>x*(1+2*epsilon) has relative precision 2*epsilon, while Sqrt(x) has relative precision epsilon/2. So if we square a number x, we lose one significant bit of x, and when we take a square root of x, we gain one significant bit.
Note that the relative precision is improved after taking a root with n>1.
For integer N, the following steps are performed:
The intermediate results, u^2, v^2 and 2*u*v can be maintained easily too, due to the nature of the numbers involved ( v having only one bit set, and it being known which bit that is).
For floating point numbers, first the required number of decimals p after the decimal point is determined. Then the input number N is multiplied by a power of 10 until it has 2*p decimal digits. Then the integer square root calculation is performed, and the resulting number has p digits of precision.
Below is some Yacas script code to perform the calculation for integers.
//sqrt(1) = 1, sqrt(0) = 0
10 # BisectSqrt(0) <-- 0;
10 # BisectSqrt(1) <-- 1;

20 # BisectSqrt(N_IsPositiveInteger) <--
[
  Local(l2,u,v,u2,v2,uv2,n);
  // Find highest set bit, l2
  u := N;
  l2 := 0;
  While (u!=0)
  [
    u := u>>1;
    l2++;
  ];
  l2--;

  // 1<<(l2/2) now would be a good under estimate
  // for the square root. 1<<(l2/2) is definitely
  // set in the result. Also it is the highest
  // set bit.
  l2 := l2>>1;

  // initialize u and u2 (u2==u^2).
  u  := 1 << l2;
  u2 := u << l2;

  // Now for each lower bit:
  While( l2 != 0 )
  [
    l2--;
    // Get that bit in v, and v2 == v^2.
    v  := 1<<l2;
    v2 := v<<l2;
    // uv2 == 2*u*v, where 2==1<<1, and
    // v==1<<l2, thus 2*u*v ==
    // (1<<1)*u*(1<<l2) == u<<(l2+1)
    uv2 := u<<(l2 + 1);
    // n = (u+v)^2 = u^2 + 2*u*v + v^2
    //             = u2+uv2+v2
    n := u2 + uv2 + v2;
    // If n (the possible new best estimate for
    // sqrt(N)^2) is not larger than N, then the
    // bit l2 is set in the result, and
    // we add v to u.
    if( n <= N )
    [
      u  := u+v; // u <- u+v
      u2 := n;   // u^2 <- u^2 + 2*u*v + v^2
    ];
  ];
  u; // return result, accumulated in u.
];
The bisection algorithm uses only additions and bit shifting operations. Suppose the integer N has P decimal digits, then it has n=P*Ln(10)/Ln(2) bits. For each bit, the number of additions is about 4. Since the cost of an addition is linear in the number of bits, the total complexity of the bisection method is roughly 4*n^2=O(P^2).
In most implementations of arbitrary-precision arithmetic, the time to perform a long division is several times that of a long multiplication. Therefore it makes sense to use a method that avoids divisions. One variant of Newton's method is to solve the equation 1/r^2=x. The solution of this equation r=1/Sqrt(x) is the limit of the iteration
As usual with Newton's method, all errors are automatically corrected, so the working precision can be gradually increased until the last iteration. The full precision of P digits is used only at the last iteration; the last-but-one iteration uses P/2 digits and so on.
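A minimal Python sketch of this scheme, using the iteration r'=r*(3-x*r^2)/2 and the decimal module in place of Yacas's internal math; the names inv_sqrt and newton_sqrt as well as the guard-digit handling are ad hoc simplifications.

from decimal import Decimal, getcontext

def inv_sqrt(x, digits=50):
    # Newton's iteration for r = 1/Sqrt(x): no long divisions, and the working
    # precision is roughly doubled at each step up to the target precision.
    x = Decimal(x)
    r = Decimal(1) / Decimal(float(x)).sqrt()    # rough initial value
    prec = 15
    while prec < digits:
        prec = min(2 * prec, digits) + 5         # a few guard digits
        getcontext().prec = prec
        r = r * (3 - x * r * r) / 2
    getcontext().prec = digits
    return +r                                    # '+' rounds to the context precision

def newton_sqrt(x, digits=50):
    # Sqrt(x) = x*(1/Sqrt(x)); the trick described below merges this final
    # multiplication with the last iteration to save time.
    return +(Decimal(x) * inv_sqrt(x, digits))

print(newton_sqrt(2, 50))    # 1.4142135623730950488...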
An optimization trick is to combine the multiplication by x with the last iteration. Then computations can be organized in a special way to avoid the last full-precision multiplication. (This is described in [Karp et al. 1997] where the same trick is also applied to Newton's iteration for division.)
The idea is the following: let r be the P-digit approximation to 1/Sqrt(x) at the beginning of the last iteration. (In this notation, 2*P is the precision of the final result, so x is also known to about 2*P digits.) The unmodified procedure would have run as follows:
Now consider Newton's iteration for s<=>Sqrt(x),
Consider the cost of the last iteration of this combined method. First, we compute s=x*r, but since we only need P correct digits of s, we can use only P digits of x, so this costs us M(P). Then we compute s^2*x which, as before, costs M(P)+M(2*P), and then we compute r*(1-s^2*x) which costs only M(P). The total cost is therefore 3*M(P)+M(2*P), so we have traded one multiplication with 2*P digits for one multiplication with P digits. Since the time of the last iteration dominates the total computing time, this is a significant cost savings. For example, if the multiplication is quadratic, M(P)=O(P^2), then this saves about 30% of total execution time; for linear multiplication, the savings is about 16.67%.
These optimizations do not change the asymptotic complexity of the method, although they do reduce the constant in front of O().
Suppose we need to find Sqrt(x). Choose an integer n such that 1/4<x':=4^(-n)*x<=1. The value of n is easily found from bit counting: if b is the bit count of x, then
To compute Sqrt(x'), we use Newton's method with the initial value x'[0] obtained by interpolation of the function Sqrt(x) on the interval [1/4, 1]. A suitable interpolation function might be taken as simply (2*x+1)/3 or more precisely
This may save a few iterations, at the small expense of evaluating the interpolation function once at the beginning. However, in computing with high precision the initial iterations are very fast and this argument reduction does not give a significant speed gain. But the gain may be important at low precisions, and this technique is sometimes used in microprocessors.
Since we only need the integer part of the root, it is enough to use integer division in the Halley iteration. The sequence x[k] will monotonically approximate the number n^(1/s) from below if we start from an initial guess that is less than the exact value. (We start from below so that we have to deal with smaller integers rather than with larger integers.) If n=p^s, then after enough iterations the floating-point value of x[k] would be slightly less than p; our value is the integer part of x[k]. Therefore, at each step we check whether 1+x[k] is a solution of x^s=n, in which case we are done; and we also check whether (1+x[k])^s>n, in which case the integer part of the root is x[k]. To speed up the Halley iteration in the worst case when s^s>n, it is combined with bisection. The root bracket interval x1<x<x2 is maintained and the next iteration x[k+1] is assigned to the midpoint of the interval if Halley's formula does not give sufficiently rapid convergence. The initial root bracket interval can be taken as x[0], 2*x[0].
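For illustration, here is a simplified Python sketch that finds the integer part of n^(1/s) with a plain Newton iteration on integers, starting from an upper bound, instead of the Halley-plus-bisection scheme described above; the name int_nthroot is ad hoc.

def int_nthroot(n, s):
    # Integer part of n^(1/s) for n >= 0, s >= 1, by Newton's iteration with
    # integer divisions only; the sequence decreases monotonically to the root.
    if n == 0:
        return 0
    x = 1 << (n.bit_length() // s + 1)    # initial guess, guaranteed >= the root
    while True:
        y = ((s - 1) * x + n // x ** (s - 1)) // s
        if y >= x:
            return x
        x = y

assert int_nthroot(10**18, 3) == 10**6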
If s is very large ( s^s>n), the convergence of both Newton's and Halley's iterations is almost linear until the final few iterations. Therefore it is faster to evaluate the floating-point power for large s using the exponential and the logarithm.
The trick of combining the last iteration with the final multiplication by x can be also used with all higher-order schemes.
Consider the cost of one iteration of n-th order. Let the initial precision of r be P; then the final precision is n*P and we use up to n*P digits of x. First we compute y:=1-r^2*x to P*(n-1) digits; this costs M(P) for r^2 and then M(P*n) for r^2*x. The value of y is of order 10^(-P) and it has P*(n-1) digits, so we only need to use that many digits to multiply it by r, and r*y now costs us M(P*(n-1)). To compute y^k (here 2<=k<=n-1), we need M(P*(n-k)) digits of y; since we need all consecutive powers of y, it is best to compute the powers one after another, lowering the precision on the way. The cost of computing r*y^k*y after having computed r*y^k is therefore M(P*(n-k-1)). The total cost of the iteration comes to
From the general considerations in the previous chapter (see the section on Newton's method) it follows that the optimal order is n=2 and that higher-order schemes are slower in this case.
Newton's method (2) is best for all other cases: large precision and/or roots other than square roots.
Logarithms of complex numbers can be reduced to elementary functions of real numbers, for example:
The basic algorithm consists of (integer-) dividing x by b repeatedly until x becomes 0 and counting the necessary number of divisions. If x has P digits and b and P are small numbers, then division is linear in P and the total number of divisions is O(P). Therefore this algorithm costs O(P^2) operations.
A speed-up for large x is achieved by first comparing x with b, then with b^2, b^4, etc., without performing any divisions. We perform n such steps until the factor b^2^n is larger than x. At this point, x is divided by the previous power of b and the remaining value is iteratively compared with and divided by successively smaller powers of b. The number of squarings needed to compute b^2^n is logarithmic in P. However, the last few of these multiplications are long multiplications with numbers of length P/4, P/2, P digits. These multiplications take the time O(M(P)). Then we need to perform another long division and a series of progressively shorter divisions. The total cost is still O(M(P)). For large P, the cost of multiplication M(P) is less than O(P^2) and therefore this method is preferable.
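A short Python sketch of this scheme (the name int_log is ad hoc; the actual Yacas routine may differ in details):

def int_log(x, b):
    # Integer part of the base-b logarithm of x (x >= 1, b >= 2): first square b
    # repeatedly without dividing, then divide by progressively smaller powers.
    powers = []                    # powers[k] = b^(2^k)
    p = b
    while p <= x:
        powers.append(p)
        p = p * p
    log = 0
    for k in reversed(range(len(powers))):
        if powers[k] <= x:
            x //= powers[k]
            log += 1 << k
    return log

assert int_log(10**6, 10) == 6 and int_log(999, 10) == 2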
There is one special case, the binary (base 2) logarithm. Since the internal representation of floating-point numbers is usually in binary, the integer part of the binary logarithm can be usually implemented as a constant-time operation.
The logarithm satisfies Ln(1/x)= -Ln(x). Therefore we need to consider only x>1, or alternatively, only 0<x<1.
Note that the relative precision for x translates into absolute precision for Ln(x). This is because Ln(x*(1+epsilon))<=>Ln(x)+epsilon for small epsilon. Therefore, the relative precision of the result is at best epsilon/Ln(x). So to obtain P decimal digits of Ln(x), we need to know P-Ln(Abs(Ln(x)))/Ln(10) digits of x. This is better than the relative precision of x if x>e but worse if x<=>1.
If x>1, then we can compute -Ln(1/x) instead of Ln(x). However, the series converges very slowly if x is close to 0 or to 2.
Here is an estimate of the necessary number of terms to achieve a (relative) precision of P decimal digits when computing Ln(1+x) for small real x. Suppose that x is of order 10^(-N), where N>=1. The error after keeping n terms is not greater than the first discarded term, x^(n+1)/(n+1). The magnitude of the sum is approximately x, so the relative error is x^n/(n+1) and this should be smaller than 10^(-P). We obtain a sufficient condition n>P/N.
All calculations need to be performed with P digits of precision. The "rectangular" scheme for evaluating n terms of the Taylor series needs about 2*Sqrt(n) long multiplications. Therefore the cost of this calculation is 2*Sqrt(P/N)*M(P).
When P is very large (so that a fast multiplication can be used) and x is a small rational number, then the binary splitting technique can be used to compute the Taylor series. In this case the cost is O(M(P)*Ln(P)).
Note that we need to know P+N digits of 1+x to be able to extract P digits of Ln(1+x). The N extra digits will be lost when we subtract 1 from 1+x.
One way is to take several square roots, reducing x to x^2^(-k) until x becomes close to 1. Then we can compute Ln(x^2^(-k)) using the Taylor series and use the identity Ln(x)=2^k*Ln(x^2^(-k)).
The number of times to take the square root can be chosen to minimize the total computational cost. Each square root operation takes the time equivalent to a fixed number c of long multiplications. (According to the estimate of [Brent 1975], c<=>13/2.) Suppose x is initially of order 10^L where L>0. Then we can take the square root k[1] times and reduce x to about 1.33. Here we can take k[1]<=>Ln(L)/Ln(2)+3. After that, we can take the square root k[2] times and reduce x to 1+10^(-N) with N>=1. For this we need k[2]<=>1+N*Ln(10)/Ln(2) square roots. The cost of all square roots is c*(k[1]+k[2]) long multiplications. Now we can use the Taylor series and obtain Ln(x^2^(-k[1]-k[2])) in 2*Sqrt(P/N) multiplications. We can choose N to minimize the total cost for a given L.
The initial value for x can be found by bit counting on the number a. If m is the "bit count" of a, i.e. m is an integer such that 1/2<=a*2^(-m)<1, then the first approximation to Ln(a) is m*Ln(2). (Here we can use a very rough approximation to Ln(2), for example, 2/3.)
The initial value found in this fashion will be correct to about one bit. The number of correct digits triples at each Halley iteration, so the result will have about 3^k correct bits after k iterations (this disregards round-off error). Therefore the required number of iterations for P decimal digits is 1/Ln(3)*Ln(P*Ln(10)/Ln(2)).
This method is currently faster than other methods (with internal math) and so it is implemented in the routine LnNum.
This method can be generalized to higher orders. Let y:=1-a*Exp(-x[0]), where x[0] is a good approximation to Ln(a) so y is small. Then Ln(a)=x[0]+Ln(1-y) and we can expand in y to obtain
The optimal number of terms to take depends on the speed of the implementation of Exp(x).
The required number of AGM iterations is approximately 2*Ln(P)/Ln(2). For smaller values of x (but x>1), one can either raise x to a large integer power r and then compute 1/r*Ln(x^r) (this is quick only if x is itself an integer or a rational), or multiply x by a large integer power of 2 and compute Ln(2^s*x)-s*Ln(2) (this is better for floating-point x). Here the required powers are
If x<1, then (-Ln(1/x)) is computed.
Finally, there is a special case when x is very close to 1, where the Taylor series converges quickly but the AGM algorithm requires multiplying x by a large power of 2 and then subtracting two almost equal numbers, leading to a great waste of precision. Suppose 1<x<1+10^(-M), where M is large (say of order P). The Taylor series for Ln(1+epsilon) needs about N= -P*Ln(10)/Ln(epsilon)=P/M terms. If we evaluate the Taylor series using the rectangular scheme, we need 2*Sqrt(N) multiplications and Sqrt(N) units of storage. On the other hand, the main slow operation for the AGM sequence is the geometric mean Sqrt(a*b). If Sqrt(a*b) takes an equivalent of c multiplications (Brent's estimate is c=13/2 but it may be more in practice), then the AGM sequence requires 2*c*Ln(P)/Ln(2) multiplications. Therefore the Taylor series method is more efficient for
For larger x>1+10^(-M), the AGM method is more efficient. It is necessary to increase the working precision to P+M*Ln(2)/Ln(10) but this does not decrease the asymptotic speed of the algorithm. To compute Ln(x) with P digits of precision for any x, only O(Ln(P)) long multiplications are required.
The simplest version is this: for integer m, we have the identity Ln(x)=m+Ln(x*e^(-m)). Assuming that e:=Exp(1) is precomputed, we can find the smallest integer m for which x<=e^m by computing the integer powers of e and comparing with x. (If x is large, we do not really have to go through all integer m: instead we can estimate m by bit counting on x and start from e^m.) Once we have found m, we can use the Taylor series on 1-delta:=x*e^(-m); since we have found the smallest possible m, we have 0<=delta<1-1/e.
A refinement of this method requires precomputing b=Exp(2^(-k)) for some fixed integer k>=1. (This can be done efficiently using the squaring trick for the exponentials.) First we find the smallest power m of b which is above x. To do this, we compute successive powers of b and find the first integer m such that x<=b^m=Exp(m*2^(-k)). When we find such m, we define 1-delta:=x*b^(-m) and then delta will be small, because 0<delta<1-1/b<=>2^(-k) (the latter approximation is good if k is large). We compute Ln(1-delta) using the Taylor series and finally find Ln(x)=m*2^(-k)+Ln(1-delta).
For smaller delta, the Taylor series of Ln(1-delta) is more efficient. Therefore, we have a trade-off between having to perform more multiplications to find m, and having a faster convergence of the Taylor series.
This series converges for all z such that Re(a+z)>0 if a>0. The convergence rate is, however, the same as for the original Taylor series. In other words, it converges slowly unless z/(2*a+z) is small. The parameter a can be chosen to optimize the convergence; however, Ln(a) should be either precomputed or easily computable for this method to be efficient.
For instance, if x>1, we can choose a=2^k for an integer k>=1, such that 2^(k-1)<=x<2^k=a. (In other words, k is the bit count of x.) In that case, we represent x=a-z and we find that the expansion parameter z/(2*a-z)<1/3. So a certain rate of convergence is guaranteed, and it is enough to take a fixed number of terms, about P*Ln(10)/Ln(3), to obtain P decimal digits of Ln(x) for any x. (We should also precompute Ln(2) for this scheme to work.)
If 0<x<1, we can compute -Ln(1/x).
This method works robustly but is slower than the Taylor series with some kind of argument reduction. With the "rectangular" method of summation, the total cost is O(Sqrt(P)*M(P)).
The method shall compute Ln(1+x) for real x such that Abs(x)<1/2. For other x, some sort of argument reduction needs to be applied. (So this method is a replacement for the Taylor series that is asymptotically faster at very high precision.)
The main idea is to use the property
More formally, we can write the method as a loop over k, starting with k=1 and stopping when 2^(-k)<10^(-P) is below the required precision. At the beginning of the loop we have y=0, z=x, k=1 and Abs(z)<1/2. The loop invariants are (1+z)*Exp(y) which is always equal to the original number 1+x, and the condition Abs(z)<2^(-k). If we construct this loop, then it is clear that at the end of the loop 1+z will become 1 to required precision and therefore y will be equal to Ln(1+x).
The body of the loop consists of the following steps:
The total number of steps in the loop is at most Ln(P*Ln(10)/Ln(2))/Ln(2). Each step requires O(M(P)*Ln(P)) operations because the exponential Exp(-f) is taken at a rational argument f and can be computed using the binary splitting technique. (Toward the end of the loop, the number of significant digits of f grows, but the number of digits we need to obtain is decreased. At the last iteration, f contains about half of the digits of x but computing Exp(-f) requires only one term of the Taylor series.) Therefore the total cost is O(M(P)*Ln(P)^2).
Essentially the same method can be used to evaluate a complex logarithm, Ln(a+I*b). It is slower but the asymptotic cost is the same.
This method does not seem to provide a computational advantage compared with the other methods.
First, we need to divide x by a certain power of 2 to reduce x to y in the interval 1<=y<2. We can use the bit count m=BitCount(x) to find an integer m such that 1/2<=x*2^(-m)<1 and take y=x*2^(1-m). Then Ln(x)/Ln(2)=Ln(y)/Ln(2)+m-1.
Now we shall find the bits in the binary representation of Ln(y)/Ln(2), one by one. Given a real y such that 1<=y<2, the value Ln(y)/Ln(2) is between 0 and 1. Now,
The process is finished either when the required number of bits of Ln(y)/Ln(2) is found, or when the precision of the argument is exhausted, whichever occurs first. Note that each iteration requires a long multiplication (squaring) of a number, and each squaring loses 1 bit of relative precision, so after k iterations the number of precise bits of y would be P-k. Therefore we cannot have more iterations than P (the number of precise bits in the original value of x). The total cost is O(P*M(P)).
The squaring at each iteration needs to be performed not with all digits, but with the number of precise digits left in the current value of y. This does not reduce the asymptotic complexity; it remains O(P*M(P)).
Comparing this method with the Taylor series, we find that the only advantage of this method is simplicity. The Taylor series requires about P terms, with one long multiplication and one short division per term, while the bisection method does not need any short divisions. However, the rectangular method of Taylor summation cuts the time down to O(Sqrt(P)) long multiplications, at a cost of some storage and bookkeeping overhead. Therefore, the bisection method may give an advantage only at very low precisions. (This is why it is sometimes used in microprocessors.) The similar method for the exponential function requires a square root at every iteration and is never competitive with the Taylor series.
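For completeness, here is a Python sketch of the bit-by-bit procedure described above, with the usual update rule: square y, and if the square is at least 2, record a 1 bit and halve it. The name log2_bits and the choice of guard digits are ad hoc.

from decimal import Decimal, getcontext

def log2_bits(y, nbits=50):
    # First nbits bits of Ln(y)/Ln(2) for 1 <= y < 2, one bit per squaring.
    getcontext().prec = nbits + 10
    y = Decimal(y)
    result = Decimal(0)
    bit = Decimal("0.5")
    for _ in range(nbits):
        y = y * y
        if y >= 2:
            y = y / 2
            result += bit
        bit /= 2
    return result

print(log2_bits("1.5", 60))    # Ln(1.5)/Ln(2) = 0.584962500721...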
The exponential function satisfies Exp(-x)=1/Exp(x). Therefore we need to consider only x>0.
Note that the absolute precision for x translates into relative precision for Exp(x). This is because Exp(x+epsilon)<=>Exp(x)*(1+epsilon) for small epsilon. Therefore, to obtain P decimal digits of Exp(x) we need to know x with absolute precision of at least 10^(-P), that is, we need to know P+Ln(Abs(x))/Ln(10) digits of x. Thus, the relative precision becomes worse after taking the exponential if x>1 but improves if x is very small.
If x is sufficiently small, e.g. Abs(x)<10^(-M) and M>Ln(P)/Ln(10), then it is enough to take about P/M terms in the Taylor series. If x is of order 1, one needs about P*Ln(10)/Ln(P) terms.
If x=p/q is a small rational number, and if a fast multiplication is available, then the binary splitting technique should be used to evaluate the Taylor series. The computational cost of that is O(M(P*Ln(P))*Ln(P)).
A modification of the squaring reduction makes it possible to reduce the round-off error significantly [Brent 1978]. Instead of Exp(x)=Exp(x/2)^2, we use the identity
Newton's method gives the iteration
A cubically convergent formula is obtained if we replace Ln(x)=a by an equivalent equation
This cubically convergent iteration seems to follow from a good equivalent equation that we guessed. But it turns out that it can be generalized to higher orders. Let y:=a-Ln(x[0]) where x[0] is an approximation to Exp(a); if it is a good approximation, then y is small. Then Exp(a)=x[0]*Exp(y). Expanding in y, we obtain
The optimal number of terms to take depends on the speed of the implementation of Ln(x).
A refinement of this method is to subtract not only the integer part of x, but also the first few binary digits. We fix an integer k>=1 and precompute b:=Exp(2^(-k)). Then we find the integer m such that 0<=x-m*2^(-k)<2^(-k). (The rational number m*2^(-k) contains the integer part of x and the first k bits of x after the binary point.) Then we compute Exp(x-m*2^(-k)) using the Taylor series and Exp(m*2^(-k))=b^m by the integer powering algorithm from the precomputed value of b.
The parameter k should be chosen to minimize the computational effort.
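Here is an illustrative Python sketch of this reduction; the names exp_series and exp_reduced are ad hoc, and b=Exp(2^(-k)) would be precomputed once in a real implementation rather than recomputed by the same Taylor helper on every call.

from decimal import Decimal, getcontext

def exp_series(t, digits):
    # Plain Taylor series for Exp(t); efficient only when t is small.
    term, total, n = Decimal(1), Decimal(1), 1
    while abs(term) > Decimal(10) ** (-(digits + 5)):
        term = term * t / n
        total += term
        n += 1
    return total

def exp_reduced(x, digits=50, k=8):
    # Write x = m*2^(-k) + r with 0 <= r < 2^(-k); then Exp(x) = b^m * Exp(r),
    # where b^m is found by integer powering (Decimal's '**' with an integer m).
    getcontext().prec = digits + 10
    x = Decimal(x)
    m = int(x * 2 ** k)
    r = x - Decimal(m) / 2 ** k
    b = exp_series(Decimal(1) / 2 ** k, digits)   # precompute in practice
    return +(b ** m * exp_series(r, digits))

print(exp_reduced("3.7", 40))    # compare with Exp(3.7)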
Take the binary decomposition of x of the following form,
The cost of this method is O(M(P*Ln(P))*Ln(P)) operations.
Essentially the same method can be used to compute the complex exponential, Exp(a+I*b). This is slower but the asymptotic cost is the same.
This method does not seem to provide a computational advantage compared with the other methods.
Efficient iterative algorithms for computing pi with arbitrary precision have been recently developed by Brent, Salamin, Borwein and others. However, limitations of the current multiple-precision implementation in Yacas (compiled with the "internal" math option) make these advanced algorithms run slower because they require many more arbitrary-precision multiplications at each iteration.
The file examples/pi.ys implements several different algorithms that duplicate the functionality of Pi(). See [Gourdon et al. 2001] for more details of computations of pi and generalizations of Newton-Raphson iteration.
Since pi is a solution of Sin(x)=0, one may start sufficiently close, e.g. at x0=3.14159265 and iterate x'=x-Tan(x). In fact it is faster to iterate x'=x+Sin(x) which solves a different equation for pi. PiMethod0() is the straightforward implementation of the latter iteration. A significant speed improvement is achieved by doing calculations at each iteration only with the precision of the root that we expect to get from that iteration. Any imprecision introduced by round-off will be automatically corrected at the next iteration.
If at some iteration x=pi+epsilon for small epsilon, then from the Taylor expansion of Sin(x) it follows that the value x' at the next iteration will differ from pi by O(epsilon^3). Therefore, the number of correct digits triples at each iteration. If we know the number of correct digits of pi in the initial approximation, we can decide in advance how many iterations to compute and what precision to use at each iteration.
The final speed-up in PiMethod0() is to avoid computing at unnecessarily high precision. This may happen if, for example, we need to evaluate 200 digits of pi starting with 20 correct digits. After 2 iterations we would be calculating with 180 digits; the next iteration would have given us 540 digits but we only need 200, so the third iteration would be wasteful. This can be avoided by first computing pi to just over 1/3 of the required precision, i.e. to 67 digits, and then executing the last iteration at full 200 digits. There is still a wasteful step when we would go from 60 digits to 67, but much less time would be wasted than in the calculation with 200 digits of precision.
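The following Python sketch imitates the idea of PiMethod0 (it is not the Yacas code): iterate x'=x+Sin(x) while raising the working precision in step with the expected number of correct digits. The helpers sin_taylor and pi_method0 are ad hoc.

from decimal import Decimal, getcontext
import math

def sin_taylor(x, digits):
    # Plain Taylor series for Sin(x), evaluated with 'digits' working digits.
    getcontext().prec = digits + 10
    term, total, n = x, x, 1
    while abs(term) > Decimal(10) ** (-(digits + 5)):
        term = -term * x * x / ((2 * n) * (2 * n + 1))
        total += term
        n += 1
    return total

def pi_method0(digits=100):
    x = Decimal(repr(math.pi))     # about 16 correct digits to start with
    correct = 15
    while correct < digits:
        correct = min(3 * correct, digits)   # digits triple at each iteration
        x = x + sin_taylor(x, correct)
    getcontext().prec = digits
    return +x

print(pi_method0(60))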
Newton's method is based on approximating the function f(x) by a straight line. One can achieve better approximation and therefore faster convergence to the root if one approximates the function with a polynomial curve of higher order. The routine PiMethod1() uses the iteration
Both PiMethod0() and PiMethod1() require a computation of Sin(x) at every iteration. An industrial-strength arbitrary precision library such as gmp can multiply numbers much faster than it can evaluate a trigonometric function. Therefore, it would be good to have a method which does not require trigonometrics. PiMethod2() is a simple attempt to remedy the problem. It computes the Taylor series for ArcTan(x),
While(Not enough precision) [ |
]; |
At each iteration, the variable pi will have twice as many correct digits as it had at the previous iteration.
To obtain the value of Pi with P decimal digits, one needs to take
If this series is evaluated using Horner's scheme (the routine PiChudnovsky), then about Ln(n)/Ln(10) extra digits are needed to compensate for round-off error while adding n terms. This method does not require any long multiplications and costs O(P^2) operations.
A potentially much faster way to evaluate this series at high precision is by using the binary splitting technique. This would give the asymptotic cost O(M(P*Ln(P))*Ln(P)).
Tangent is computed as the ratio Sin(x)/Cos(x), or from Sin(x) using the identity
For Cos(x), the bisection identity can be used more efficiently if it is written as
For Sin(x), the trisection identity is
The optimal number of bisections or trisections should be estimated to reduce the total computational cost. The resulting number will depend on the magnitude of the argument x, on the required precision P, and on the speed of the available multiplication M(P).
Alternatively, ArcSin(x) may be found from the Taylor series and inverted to obtain Sin(x).
This method seems to be of marginal value since efficient direct methods for Cos(x), Sin(x) are available.
By the identity ArcCos(x):=Pi/2-ArcSin(x), the inverse cosine is reduced to the inverse sine. Newton's method for ArcSin(x) consists of solving the equation Sin(y)=x for y. Implementation is similar to the calculation of pi in PiMethod0().
For x close to 1, Newton's method for ArcSin(x) converges very slowly. An identity
Inverse tangent can also be related to inverse sine by
Alternatively, the Taylor series can be used for the inverse sine:
An everywhere convergent continued fraction can be used for the tangent:
Hyperbolic and inverse hyperbolic functions are reduced to exponentials and logarithms: Cosh(x)=1/2*(Exp(x)+Exp(-x)), Sinh(x)=1/2*(Exp(x)-Exp(-x)), Tanh(x)=Sinh(x)/Cosh(x),
The convergence of the continued fraction expansion of ArcTan(x) is indeed better than convergence of the Taylor series. Namely, the Taylor series converges only for Abs(x)<1 while the continued fraction converges for all x. However, the speed of its convergence is not uniform in x; the larger the value of x, the slower the convergence. The necessary number of terms of the continued fraction is in any case proportional to the required number of digits of precision, but the constant of proportionality depends on x.
This can be understood by the following argument. The difference between two partial continued fractions that differ only by one extra last term can be estimated as
If we compare the rate of convergence of the continued fraction for ArcTan(x) with the Taylor series
The "double factorial" n!! :=n*(n-2)*(n-4)*... is also useful for some calculations. For convenience, one defines 0! :=1, 0!! :=1, and (-1)!! :=1; with these definitions, the recurrence relations
There are two tasks related to the factorial: the exact integer calculation and an approximate calculation to some floating-point precision. Factorial of n has approximately n*Ln(n)/Ln(10) decimal digits, so an exact calculation is practical only for relatively small n. In the current implementation, exact factorials for n>65535 are not computed; instead, an error message is printed advising the user to avoid exact computations of large factorials. For example, LnGammaNum(n+1) is able to compute Ln(n!) for very large n to any desired floating-point precision.
A second method uses a binary tree arrangement of the numbers 1, 2, ..., n similar to the recursive sorting routine ("merge-sort"). If we denote by a *** b the "partial factorial" product a*(a+1)*...(b-1)*b, then the tree-factorial algorithm consists of replacing n! by 1***n and recursively evaluating (1***m)*((m+1)***n) for some integer m near n/2. The partial factorials of nearby numbers such as m***(m+2) are evaluated explicitly. The binary tree algorithm requires one multiplication of P/2 digit integers at the last step, two P/4 digit multiplications at the last-but-one step and so on. There are O(Ln(n)) total steps of the recursion. If the cost of multiplication is M(P)=P^(1+a)*Ln(P)^b, then one can show that the total cost of the binary tree algorithm is O(M(P)) if a>0 and O(M(P)*Ln(n)) if a=0 (which is the best asymptotic multiplication algorithm).
Therefore, the tree method wins over the simple method if the cost of multiplication is lower than quadratic.
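An illustrative Python sketch of the binary tree factorial; the names tree_factorial and partial are ad hoc, and the cut-off of four factors for the base case is arbitrary.

import math

def tree_factorial(n):
    # Balanced products keep the operands of the final long multiplications
    # of roughly equal size, which is where the speed-up comes from.
    def partial(a, b):                 # the product a*(a+1)*...*b
        if b - a < 4:
            result = a
            for k in range(a + 1, b + 1):
                result *= k
            return result
        m = (a + b) // 2
        return partial(a, m) * partial(m + 1, b)
    return 1 if n < 2 else partial(2, n)

assert tree_factorial(1000) == math.factorial(1000)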
The tree method can also be used to compute "double factorials" ( n!!). This is faster than using the identities
Binomial coefficients Bin(n,m) are found by first selecting the smaller of m, n-m and using the identity Bin(n,m)=Bin(n,n-m). Then a partial factorial is used to compute
In principle, one could choose any (non-negative) weight function rho(x) and any interval [a, b] and construct the corresponding family of orthogonal polynomials q[n](x). For example, take q[0]=1, then take q[1]=x+c with unknown c and find the c for which q[0] and q[1] satisfy the orthogonality condition; this requires solving a linear equation. Then we can similarly find the two unknown coefficients of q[2] and so on. (This is called the Gram-Schmidt orthogonalization procedure.)
But of course not all weight functions rho(x) and not all intervals [a, b] are equally interesting. There are several "classical" families of orthogonal polynomials that have been of use to theoretical and applied science. The "classical" polynomials are always solutions of a simple second-order differential equation and are always a specific case of some hypergeometric function.
The construction of "classical" polynomials can be described by the following scheme. The function rho(x) must satisfy the so-called Pearson's equation,
If the function rho(x) and the interval [a, b] are chosen in this way, then the corresponding orthogonal polynomials q[n](x) are solutions of the differential equation
Finally, there is a formula for the generating function of the polynomials,
The classical families of (normalized) orthogonal polynomials are obtained in this framework with the following definitions:
The Rodrigues formula or the generating function are not efficient ways to calculate the polynomials. A better way is to use linear recurrence relations connecting q[n+1] with q[n] and q[n-1]. (These recurrence relations can also be written out in full generality through alpha(x) and beta(x) but we shall save the space.)
There are three computational tasks related to orthogonal polynomials:
In the next section we shall give some formulae that allow to calculate particular polynomials more efficiently.
There is a way to implement this method without recursion. The idea is to build the sequence of numbers n[1], n[2], ... that are needed to compute OrthoT(n,x).
For example, to compute OrthoT(19,x) using the second recurrence relation, we need OrthoT(10,x) and OrthoT(9,x). We can write this chain symbolically as 19<>c(9,10). For OrthoT(10,x) we need only OrthoT(5,x). This we can write as 10<>c(5). Similarly we find: 9<>c(4,5). Therefore, we can find both OrthoT(9,x) and OrthoT(10,x) if we know OrthoT(4,x) and OrthoT(5,x). Eventually we find the following chain of pairs:
There are about 2*Ln(n)/Ln(2) elements in the chain that leads to the number n. We can generate this chain in a straightforward way by examining the bits in the binary representation of n. Therefore, we find that this method requires no storage and time logarithmic in n. A recursive routine would also take logarithmic time but require logarithmic storage space.
Note that using these recurrence relations we do not obtain any individual coefficients of the Chebyshev polynomials. This method does not seem very useful for symbolic calculations (with symbolic x), because the resulting expressions are rather complicated combinations of nested products. It is difficult to expand such an expression into powers of x or manipulate it in any other way, except compute a numerical value. However, these fast recurrences are numerically unstable, so numerical values need to be evaluated with extended working precision. Currently this method is not used in Yacas, despite its speed.
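For illustration, here is a Python sketch of this evaluation scheme. It relies on the halving identities OrthoT(2*k,x)=2*OrthoT(k,x)^2-1 and OrthoT(2*k+1,x)=2*OrthoT(k+1,x)*OrthoT(k,x)-x, which reproduce the chain 19, (9,10), (4,5), ... of the example above; the memoized recursion is used for brevity, while the explicit chain of pairs avoids the logarithmic storage, as discussed.

import math
from functools import lru_cache

def cheb_T(n, x):
    # Numerical value of the Chebyshev polynomial OrthoT(n,x) in O(Ln(n)) steps.
    @lru_cache(maxsize=None)
    def T(m):
        if m == 0:
            return 1.0
        if m == 1:
            return x
        k = m // 2
        if m % 2 == 0:
            return 2 * T(k) ** 2 - 1          # T(2k) = 2*T(k)^2 - 1
        return 2 * T(k + 1) * T(k) - x        # T(2k+1) = 2*T(k+1)*T(k) - x
    return T(n)

print(cheb_T(19, 0.3), math.cos(19 * math.acos(0.3)))   # the two should agree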
An alternative method for very large n would be to use the identities
Coefficients for Legendre, Hermite, Laguerre, Chebyshev polynomials can be obtained by explicit formulae. This is faster than using recurrences if we need the entire polynomial symbolically, but still slower than the recurrences for numerical calculations.
In all formulae for the coefficients, there is no need to compute factorials every time: the next coefficient can be obtained from the previous one by a few short multiplications and divisions. Therefore this computation costs O(n^2) short operations.
Suppose a family of functions q[n](x), n=0, 1, ... satisfies known recurrence relations of the form
The procedure goes as follows [Luke 1975]. First, for convenience, we define q[-1]:=0 and the coefficient A[1](x) so that q[1]=A[1]*q[0]. This allows us to use the above recurrence relation formally also at n=1. Then, we take the array of coefficients f[n] and define a backward recurrence relation
The book [Luke 1975] warns that the recurrence relation for X[n] is not always numerically stable.
Note that in the book there seems to be some confusion as to how the coefficient A[1] is defined. (It is not defined explicitly there.) Our final formula differs from the formula in [Luke 1975] for this reason.
The Clenshaw-Smith procedure is analogous to the Horner scheme of calculating polynomials. This procedure can also be generalized for linear recurrence relations having more than two terms. The functions q[0](x), q[1](x), A[n](x), and B[n](x) do not actually have to be polynomials for this to work.
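As an illustration, here is the procedure in Python, specialized to Chebyshev polynomials, for which q[0]=1, q[1]=x, A[n](x)=2*x and B[n](x)=-1; the name clenshaw_chebyshev is ad hoc.

def clenshaw_chebyshev(coeffs, x):
    # Backward (Clenshaw) recurrence for Sum(k, 0, N, coeffs[k]*OrthoT(k, x)).
    b1 = b2 = 0.0
    for f in reversed(coeffs[1:]):
        b1, b2 = f + 2 * x * b1 - b2, b1
    return coeffs[0] + x * b1 - b2

# The coefficient list [0, 0, 0, 1] selects OrthoT(3, x) = 4*x^3 - 3*x.
print(clenshaw_chebyshev([0, 0, 0, 1], 0.3))    # -0.792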
The Gamma function is implemented as Gamma(x). At integer values n of the argument, Gamma(n) is computed exactly. Because of overflow, it only makes sense to compute exact integer factorials for small numbers n. Currently a warning message is printed if a factorial of n>65535 is requested.
For half-integer arguments Gamma(x) is also computed exactly, using the following identities (here n is a nonnegative integer and we use the factorial notation):
If the factorial of a large integer or half-integer n needs to be computed not exactly but only with a certain floating-point precision, it is faster (for large enough Abs(n)) not to evaluate an exact integer product, but to use the floating-point numerical approximation. This method is currently not implemented in Yacas.
There is also the famous Stirling's asymptotic formula for large factorials,
Repeated partial integration gives the expansion
The method gives the Gamma-function only for arguments with positive real part; at negative values of the real part of the argument, the Gamma-function is computed via the identity
The Lanczos-Spouge approximation formula depends on a parameter a,
The coefficients c[k] and the parameter a can be chosen to achieve a greater precision of the approximation formula. However, the recipe for the coefficients c[k] given in the paper by Lanczos is too complicated for practical calculations in arbitrary precision: the time it would take to compute the array of N coefficients c[k] grows as N^3. Therefore it is better to use less precise but much simpler formulae derived by Spouge.
At version 1.0.57, Yacas is limited in its internal arbitrary precision facility that does not support true floating-point computation but rather uses fixed-point logic; this hinders precise calculations with floating-point numbers. (This concern does not apply to Yacas linked with gmp.) In the current version of the GammaNum() function, two workarounds are implemented. First, a Horner scheme is used to compute the sum; this is somewhat faster and leads to smaller round-off errors. Second, intermediate calculations are performed at 40% higher precision than requested. This is much slower but makes it possible to obtain results at the desired precision.
If strict floating-point logic is used, the working precision necessary to compensate for the cancellations must be 1.1515*P digits for P digits of the result. This can be shown as follows.
The sum converges to a certain value S which is related to the correct value of the Gamma function at z. After some algebra we find that S is of order Sqrt(a) if z>a and of order a^(1/2-z) if a>z. Since a is never a very large number, we can consider the value of S to be roughly of order 1, compared with exponentially large values of some of the terms c[k] of this sum. The magnitude of a coefficient c[k] is estimated by Stirling's formula,
For a given (large) value of Abs(x), the terms of this series decrease at first, but then start to grow. (Here x can be a complex number.) There exist estimates for the error term of the asymptotic series (see [Abramowitz et al. 1964], 6.1.42). Roughly, the error is of the order of the first discarded term.
We can estimate the magnitude of the terms using the asymptotic formula for the Bernoulli numbers (see below). After some algebra, we find that the value of n at which the series starts to grow and diverge is n[0]<=>Pi*Abs(x)+2. Therefore at given x we can only use the asymptotic series up to the n[0]-th term.
For example, if we take x=10, then we find that the 32-nd term of the asymptotic series has the smallest magnitude (about 10^(-28)) but the following terms start to grow.
To be on the safe side, we should drop a few more terms from the series. Define the number of terms by n[0]:=Pi*Abs(x). Then the order of magnitude of the n[0]-th term is Exp(-2*Pi*Abs(x))/(2*Pi^2*Abs(x)). This should be compared with the magnitude of the sum of the series which is of order Abs(x*Ln(x)). We find that the relative precision of P decimal digits or better is achieved if
For very large P, the inequality is satisfied when roughly x>P*Ln(10)/Ln(P). Assuming that the Bernoulli numbers are precomputed, the complexity of this method is that of computing a Taylor series with n[0] terms, which is roughly O(Sqrt(P))*M(P).
What if x is not large enough? Using the identity x*Gamma(x)=Gamma(x+1), we can reduce the computation of Gamma(x) to Gamma(x+M) for some integer M. Then we can choose M to be large enough so that the asymptotic series gives the required precision when evaluated at x+M. We shall have to divide the result M times by some long numbers to obtain Gamma(x). Therefore, the complexity of this method for given (x, P) is increased by M(P)*(P*Ln(10)/Ln(P)-x). For small x this will be the dominant contribution to the complexity.
On the other hand, if the Bernoulli numbers are not available precomputed, then their calculation dominates the complexity of the algorithm.
This method works well when 1<=x<2 (other values of x need to be reduced first). The idea is to represent the Gamma function as a sum of two integrals,
The first integral in this equation can be found as a sum of the Taylor series (expanding Exp(-u) near u=0),
Now we can estimate the number of terms in the above series. We know that the value of the Gamma function is of order 1. The condition that n-th term of the series is smaller than 10^(-P) gives n*Ln(n/e*M)>P*Ln(10). With the above value for M, we obtain n=P*Ln(10)/W(1/e) where W is Lambert's function; W(1/e)<=>0.2785.
The terms of the series are however not monotonic: first the terms grow and then they start to decrease, like in the Taylor series for the exponential function evaluated at a large argument. The ratio of the ( k+1)-th term to the k-th term is approximately M/(k+1). Therefore the terms with k<=>M will be the largest and will have the magnitude of order M^M/M! <=>Exp(M)<=>10^P. In other words, we will be adding and subtracting large numbers with P digits before the decimal point, but we need to obtain a result with P digits after the decimal point. Therefore to avoid the round-off error we need to increase the working precision to 2*P floating-point decimal digits.
It is quicker to compute this series if x is a small rational number, because then the long multiplications can be avoided, or at high enough precision the binary splitting can be used. Calculations are also somewhat faster if M is chosen as an integer value.
If the second integral is approximated by an asymptotic series instead of a constant Exp(-M), then it turns out that the smallest error of the series is Exp(-2*M). Therefore we can choose a smaller value of M and the round-off error gets somewhat smaller. According to [Brent 1978], we then need only 3/2*P digits of working precision, rather than 2*P, for computing the first series (and only P/2 digits for computing the second series). However, this computational savings may not be significant enough to justify computing a second series.
The basic formulae for the "fast" method (Brent's method "B1") are:
First, the sequence H[n] is defined as the partial sum of the harmonic series:
According to [Brent et al. 1980], the error of this approximation of gamma, assuming that S(n) and V(n) are computed exactly, is
The required number of terms k[max] in the summation over k to get S(n) and V(n) with this precision can be approximated as usual via Stirling's formula. It turns out that k[max] is also proportional to the number of digits, k[max]<=>2.07*P.
Therefore, this method of computing gamma has "linear convergence", i.e. the number of iterations is linear in the number of correct digits we need in the result. Of course, all calculations need to be performed with the working precision. The working precision must be a few digits more than P because we accumulate about Ln(k[max])/Ln(10) digits of round-off error by performing k[max] arithmetic operations.
Brent mentions a small improvement on this method (his method "B3"). It consists of estimating the error of the approximation of gamma by an asymptotic series. Denote W(n) the function
This trick can be formulated for any sequence A[k] of the form A[k]=B[k]*C[k], where the sequences B[k] and C[k] are given by the recurrences B[k]=p(k)*B[k-1] and C[k]=q(k)+C[k-1]. Here we assume that p(k) and q(k) are known functions of k that can be computed to P digits using O(P) operations, e.g. rational functions with short constant coefficients. Instead of evaluating B[k] and C[k] separately and multiplying them using a long multiplication, we note that p(k)*A[k-1]=B[k]*C[k-1]. This allows A[k] to be computed using the following two recurrences:
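The formulas for S(n) and V(n) themselves are not repeated here, so the Python sketch below assumes the standard "B1" expressions gamma<=>S(n)/V(n)-Ln(n) with V(n)=Sum(k,0,Infinity,(n^k/k!)^2) and S(n)=Sum(k,0,Infinity,(n^k/k!)^2*H[k]), whose error is of order Exp(-4*n); if the precise formulas differ, the splitting trick itself carries over unchanged. Here B[k]=(n^k/k!)^2 and C[k]=H[k], so p(k)=(n/k)^2 and q(k)=1/k. The name euler_gamma is ad hoc.

import math
from fractions import Fraction
from decimal import Decimal, getcontext

def euler_gamma(P=30):
    n = int(P * math.log(10) / 4) + 2        # error ~ Exp(-4*n) < 10^(-P)
    B = Fraction(1)                          # B[0] = 1
    A = Fraction(0)                          # A[0] = B[0]*H[0] = 0
    V, S = B, A
    k = 1
    while k <= n or B > Fraction(1, 10 ** (P + 5)):
        p = Fraction(n, k) ** 2
        B = p * B                            # B[k] = p(k)*B[k-1]
        A = p * A + B / k                    # A[k] = p(k)*A[k-1] + q(k)*B[k]
        V += B
        S += A
        k += 1
    getcontext().prec = P + 10
    ratio = S / V                            # exact rational S(n)/V(n)
    return +(Decimal(ratio.numerator) / Decimal(ratio.denominator)
             - Decimal(n).ln())

print(euler_gamma(30))    # 0.577215664901532860606512...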
Also, it turns out that we can use a variant of the fast "rectangular method" to evaluate the series for U(n) and V(n) simultaneously. (We can consider these series as Taylor series in n^2.) This however does not speed up the evaluation of gamma. This happens because the rectangular method requires long multiplications and leads in this case to increased round-off errors. The rectangular method for computing a power series in x is less efficient than a straightforward computation when x is a "short" rational or integer number.
The "rectangular method" for computing Sum(k,0,N,x^k*A[k]) needs to be able to convert a coefficient of the Taylor series into the next coefficient A[k+1] by "short" operations, more precisely, by some multiplications and divisions by integers of order k. The j-th column of the rectangle ( j=0, 1, ...) consists of numbers x^(r*j)*A[r*j], x^(r*j)*A[r*j+1], ..., x^r*A[r*j+r-1]. The numbers of this column are computed sequentially by short operations, starting from the x^(r*j)*A[j*r] which is known from the end of the previous column. The recurrence relation for A[k] is not just some multiplication by rational numbers, but also contains an addition of B[k]. However, if we also use the rectangular method for V(n), the number x^(r*j)*B[r*j] will be known and so we will be able to use the recurrence relation to get x^(r*j)*A[r*j+1] and all following numbers of the column.
To obtain P decimal digits of relative precision, we need to take at most P*Ln(10)/Ln(4) terms of the series. The sum can be efficiently evaluated using Horner's scheme, for example
A drawback of this scheme is that it requires a separate high-precision computation of Pi, Sqrt(3) and of the logarithm.
This method combined with Brent's summation trick (see the section on the Euler constant) was used in [Fee 1990]. Brent's trick allows to avoid a separate computation of the harmonic sum and all long multiplications. Catalan's constant is obtained as a limit of G[k] where G[0]=B[0]=1/2 and
A third formula is more complicated but the convergence is much faster and there is no need to evaluate any other transcendental functions. This formula is called "Broadhurst's series".
We need to take only P*Ln(10)/Ln(16) terms of the first series and P*Ln(10)/Ln(4096) terms of the second series. However, each term is about six times more complicated than one term of the first method's series. So there are no computational savings (unless Ln(x) is excessively slow).
The classic book [Bateman et al. 1953], vol. 1, describes many results concerning the properties of Zeta(s).
For the numerical evaluation of Riemann's Zeta function with arbitrary precision to become feasible, one needs special algorithms. Recently P. Borwein [Borwein 1995] gave a simple and quick approximation algorithm for Re(s)>0. See also [Borwein et al. 1999] for a review of methods.
It is the "third" algorithm (the simplest one) from P. Borwein's paper which is implemented in Yacas. The approximation formula valid for Re(s)> -(n-1) is
This method requires computing the exponential and the logarithm n times to find the powers (j+1)^(-s). Each such power can be computed in asymptotic time O(M(P)*Ln(P)), unless s is an integer, in which case this computation is O(M(P)), the cost of one division by an integer (j+1)^s. Therefore the complexity of this method is at most O(P*M(P)*Ln(P)).
The function Zeta(s) calls ZetaNum(s) to compute this approximation formula for Re(s)>1/2 and uses the identity above to get the value for other s.
For very large values of s, it is faster to use more direct methods implemented in the routines ZetaNum1(s,N) and ZetaNum2(s,N). If the required precision is P digits and s>1+Ln(10)/Ln(P)*P, then it is more efficient to compute the defining series for Zeta(s),
Alternatively, one can use ZetaNum2(n,N) which computes the infinite product over prime numbers p[i]
The value Zeta(3), also known as Apery's constant, can be computed using the following geometrically convergent series:
For other odd integers n there is no general analogous formula. The corresponding expressions for Zeta(5) and Zeta(7) are
In these series the term Bin(2*k,k) grows approximately as 4^k and therefore one can take no more than P*Ln(10)/Ln(4) terms in the series to get P decimal digits of relative precision.
For odd integer n there are the following special relations: for Mod(n,4)=3,
These relations contain geometrically convergent series, and it suffices to take P*Ln(10)/(2*Pi) terms to obtain P decimal digits of relative precision.
Finally, [Kanemitsu et al. 2001] gave a curious formula for the values of Riemann's Zeta function at rational values between 0 and 2 (their "Corollary 1"). This formula is very complicated but contains a geometrically convergent series.
We shall have to define several auxiliary functions to make the formula more understandable. We shall be interested in the values of Zeta(p/N) where p, N are integers. For integer h, N such that 0<=h<=N, and for arbitrary real x,
Practical calculations using this formula are of the same asymptotic complexity as Borwein's method above. (It is not clear whether this method has a significant computational advantage.) The value of x can be chosen at will, so we should find such x as to minimize the cost of computation. There are two series to be computed: the terms in the first one decay as Exp(-n^N*x) while the terms in the second one (containing f) decay only as
For a target precision of P decimal digits, the required numbers of terms n[1], n[2] for the first and the second series can be estimated as n[1]<=>((P*Ln(10))/x)^(1/N), n[2]<=>x/(2*Pi)*((P*Ln(10))/(2*Pi))^N. (Here we assume that N, the denominator of the fraction p/N, is at least 10. This scheme is impractical for very large N because it requires to add O(N) slightly different variants of the second series.) The total cost is proportional to the time it takes to compute Exp(x) or Cos(x) and to roughly n[1]+N*n[2]. The value of x that minimizes this cost function is approximately
Asymptotics of Lambert's W function are
Here are some inequalities to help estimate W(x) at large x (more exactly, for x>e):
One can also find uniform rational approximations, e.g.:
There exists a uniform approximation of the form
The numerical procedure uses Halley's method. Halley's iteration for the equation W*Exp(W)=x can be written as
The initial value is computed using one of the uniform approximation formulae. The good precision of the uniform approximation guarantees rapid convergence of the iteration scheme to the correct root of the equation, even for complex arguments x.
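For illustration, here is such an iteration in Python (double precision only, for brevity). The explicit Halley formula below is the commonly used one for the equation W*Exp(W)=x, and the crude initial guess Ln(1+x) stands in for the uniform approximations mentioned above; the name lambert_w is ad hoc.

import math

def lambert_w(x, tol=1e-15):
    # Halley's iteration for W*Exp(W) = x, principal branch, x > 0.
    w = math.log(1.0 + x)
    for _ in range(100):
        e = math.exp(w)
        f = w * e - x
        delta = f / (e * (w + 1) - (w + 2) * f / (2 * w + 2))
        w -= delta
        if abs(delta) <= tol * (1 + abs(w)):
            break
    return w

print(lambert_w(1.0))    # 0.5671432904097838...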
For large values of Abs(x), there is the following asymptotic series:
The error of a truncated asymptotic series is not larger than the first discarded term if the number of terms is larger than n-1/2. (See the book [Olver 1974] for derivations. It seems that each asymptotic series requires special treatment and yet in all cases the error is about the same as the first discarded term.)
Currently Yacas can compute BesselJ(n,x) for all x where n is an integer and for Abs(x)<=2*Gamma(n) when n is a real number. Yacas currently uses the Taylor series when Abs(x)<=2*Gamma(n) to compute the numerical value:
If Abs(x)>2*Gamma(n) and n is an integer, then Yacas uses the forward recurrence relation:
We see from the definition that when Abs(x)<=2*Gamma(n), the absolute value of each term decreases monotonically (the series is absolutely monotonically decreasing). From this we know that if we stop after i terms, the error will be bounded by the absolute value of the next term. So, given a target precision, we convert it into a value epsilon and check whether the current term still contributes to the sum at that precision. Before doing this, Yacas currently increases the working precision by 20% for the intermediate calculations; this is a heuristic that works in practice but is not backed by theory. The value epsilon is given by epsilon:=5*10^(-prec), where prec is the previous precision. This follows directly from the definition of a floating-point number that is correct to prec digits: such a number has a rounding error no greater than 5*10^(-prec). (Beware that some books incorrectly give .5 instead of 5.)
Bug: Something is not right with complex numbers, but purely imaginary arguments are OK.
Bernoulli numbers and polynomials are used in various Taylor series expansions, in the Euler-Maclauren series resummation formula, in Riemann's Zeta function and so on. For example, the sum of (integer) p-th powers of consecutive integers is given by
The Bernoulli polynomials B(x)[n] can be found by first computing an array of Bernoulli numbers up to B[n] and then applying the above formula for the coefficients.
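As an illustration, the following Python sketch builds an array of exact Bernoulli numbers with the textbook recurrence Sum(k=0 .. n, Bin(n+1,k)*B[k]) = 0 (valid for n>=1, with B[1] = -1/2), and then evaluates a Bernoulli polynomial through the standard identity B(x)[n] = Sum(k=0 .. n, Bin(n,k)*B[k]*x^(n-k)). This is only a sketch of the method; the actual BernoulliArray routine may differ in details.

    from fractions import Fraction
    from math import comb

    def bernoulli_array(nmax):
        """Exact Bernoulli numbers B[0]..B[nmax] from the recurrence
        Sum(k=0..n, Bin(n+1,k)*B[k]) = 0 for n >= 1."""
        B = [Fraction(1)]
        for n in range(1, nmax + 1):
            s = sum(comb(n + 1, k) * B[k] for k in range(n))
            B.append(-s / (n + 1))
        return B

    def bernoulli_poly(n, x):
        """B(x)[n] = Sum(k=0..n, Bin(n,k)*B[k]*x^(n-k))."""
        B = bernoulli_array(n)
        x = Fraction(x)
        return sum(comb(n, k) * B[k] * x ** (n - k) for k in range(n + 1))

    print(bernoulli_array(10)[10])            # 5/66
    print(bernoulli_poly(3, Fraction(1, 2)))  # B_3(1/2) = 0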
We consider two distinct computational tasks: evaluating a Bernoulli number exactly as a rational number, or finding it approximately to a specified floating-point precision. There are also two possible problem settings: either we need all Bernoulli numbers B[n] up to some n (this situation occurs most often in practice), or we need only one isolated value B[n] for some large n. Depending on how large n is, different algorithms can be chosen in these cases.
Here is an estimate of the cost of BernoulliArray. Suppose M(P) is the time needed to multiply P-digit integers. The required number of digits P to store the numerator of B[n] is asymptotically P<>n*Ln(n). At each of the n iterations we need to multiply O(n) large rational numbers by large coefficients and take a GCD to simplify the resulting fractions. Taking a GCD is slower than a multiplication by a factor logarithmic in P. So the complexity of this algorithm is O(n^2*M(P)*Ln(P)) with P<>n*Ln(n).
For large (even) values of the index n, a single Bernoulli number B[n] can be computed by a more efficient procedure: the integer part and the fractional part of B[n] are found separately (this method is also well explained in [Gourdon et al. 2001]).
First, by the theorem of Clausen -- von Staudt, the fractional part of (-B[n]) is the same as the fractional part of the sum of all inverse prime numbers p such that n is divisible by p-1. To illustrate the theorem, take n=10 with B[10]=5/66. The number n=10 is divisible only by 1, 2, 5, and 10; this corresponds to p=2, 3, 6 and 11. Of these, 6 is not a prime. Therefore, we exclude 6 and take the sum 1/2+1/3+1/11=61/66. The theorem now says that 61/66 has the same fractional part as -B[10]; in other words, -B[10]=i+f where i is some unknown integer and the fractional part f is a nonnegative rational number, 0<=f<1, which is now known to be 61/66. Indeed -B[10]= -1+61/66. So one can find the fractional part of the Bernoulli number relatively quickly by just checking the numbers that might divide n.
Now one needs to obtain the integer part of B[n]. The number B[n] is positive if Mod(n,4)=2 and negative if Mod(n,4)=0. One can use Riemann's Zeta function identity for even integer values of the argument and compute the value zeta(n) precisely enough so that the integer part of the Bernoulli number is determined. The required precision is found by estimating the Bernoulli number from the same identity in which one approximates zeta(n)=1, i.e.
At such large values of the argument n, it is feasible to use the routines ZetaNum1(n, N) or ZetaNum2(n,N) to compute the zeta function. These routines approximate zeta(n) by the defining series
For example, let us compute B[20] using this method.
In> 1/2 + 1/3 + 1/5 + 1/11; Out> 371/330; |
In> N(1+1/2^20) Out> 1.0000009536; |
In> N( 2*20! /(2*Pi)^20*1.0000009536 ) Out> 529.1242423667; |
In> -(529+41/330); Out> -174611/330; |
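These steps can be combined into a short Python sketch. It uses double precision for the integer part (so it is only suitable for moderate n), whereas the actual routine computes zeta(n) to whatever precision is required:

    import math
    from fractions import Fraction

    def bernoulli_single(n):
        """Isolated B[n] for even n >= 2: fractional part of -B[n] from the
        Clausen -- von Staudt theorem, integer part from the estimate
        Abs(B[n]) = 2*n!/(2*Pi)^n*zeta(n)."""
        # fractional part f: sum of 1/p over primes p such that (p-1) divides n
        f = Fraction(0)
        for d in range(1, n + 1):
            p = d + 1
            if n % d == 0 and all(p % q != 0 for q in range(2, math.isqrt(p) + 1)):
                f += Fraction(1, p)
        f -= int(f)
        # Abs(B[n]) from the zeta identity; a few terms of the defining series suffice
        zeta_n = sum(k ** float(-n) for k in range(1, 6))
        i = int(2.0 * math.factorial(n) / (2.0 * math.pi) ** n * zeta_n)
        # B[n] > 0 when Mod(n,4) = 2 and B[n] < 0 when Mod(n,4) = 0
        return i + (1 - f) if n % 4 == 2 else -(i + f)

    print(bernoulli_single(20))    # -174611/330, as obtained above
    print(bernoulli_single(10))    # 5/66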
All these steps are implemented in the routine Bernoulli1. The variable Bernoulli1Threshold determines the smallest n for which B[n] is to be computed via this routine instead of the recursion relation. Its current value is 20.
The complexity of Bernoulli1 is estimated as the complexity of finding all primes up to n, plus the complexity of computing the factorial, the power and the Zeta function. Finding the prime numbers up to n by checking all potential divisors up to Sqrt(n) requires O(n^(3/2)*M(Ln(n))) operations with precision O(Ln(n)) digits. For the second step we need to evaluate n!, Pi^n and zeta(n) with precision of P=O(n*Ln(n)) digits. The factorial is found in n short multiplications with P-digit numbers (giving O(n*P)), the power of Pi in Ln(n) long multiplications (giving O(M(P)*Ln(n))), and ZetaNum2(n) (the asymptotically faster algorithm) requires O(n*M(P)) operations. The Zeta function calculation dominates the total cost because M(P) grows faster than P. So the total complexity of Bernoulli1 is O(n*M(P)) with P<>n*Ln(n).
Note that this is the cost of finding just one Bernoulli number, as opposed to the O(n^2*M(P)*Ln(P)) cost of finding all Bernoulli numbers up to B[n] using the first algorithm BernoulliArray. If we need a complete table of Bernoulli numbers, then BernoulliArray is only marginally (logarithmically) slower. So for finding complete Bernoulli tables, Bernoulli1 is better only for very large n.
However, the recurrence relation used in BernoulliArray turns out to be numerically unstable and needs to be replaced by another one [Brent 1978]. Brent's algorithm computes the Bernoulli numbers divided by factorials, C[n]:=B[2*n]/(2*n)!, using a (numerically stable) recurrence relation
The numerical instability of the usual recurrence relation
The eigenvalue of the error sequence e[k] can be found approximately for large k if we notice that the recurrence relation for e[k] is similar to the truncated Taylor series for Sin(x). Substituting e[k]=lambda^k into it and disregarding a very small number (2*Pi)^(-2*k) on the right hand side, we find
By a very similar calculation one finds that the inverse powers of 4 in Brent's recurrence make the largest eigenvalue of the error sequence e[k] almost equal to 1 and therefore the recurrence is stable. Brent gives the relative error in the computed C[k] as O(k^2) times the round-off error in the last digit of precision.
The complexity of Brent's method is given as O(n^2*P+n*M(P)) for finding all Bernoulli numbers up to B[n] with precision P digits. This computation time can be achieved if we compute the inverse factorials and powers of 4 approximately by floating-point routines that know how much precision is needed for each term in the recurrence relation. The final long multiplication by (2*k)! computed to precision P adds M(P) to each Bernoulli number.
The non-iterative method using the Zeta function does not perform much better if a Bernoulli number B[n] has to be computed with significantly fewer digits P than the full O(n*Ln(n)) digits needed to represent the integer part of B[n]. (The fractional part of B[n] can always be computed relatively quickly.) The Zeta function needs 10^(P/n) terms, so its complexity is O(10^(P/n)*M(P)) (here by assumption P is not very large so 10^(P/n)<n/(2*Pi*e); if n>P we can disregard the power of 10 in the complexity formula). We should also add O(Ln(n)*M(P)) needed to compute the power of 2*Pi. The total complexity of Bernoulli1 is therefore O(Ln(n)*M(P)+10^(P/n)*M(P)).
If only one Bernoulli number is required, then Bernoulli1 is always faster. If all Bernoulli numbers up to a given n are required, then Brent's recurrence is faster for certain (small enough) n.
Currently Brent's recurrence is implemented as BernoulliArray1() but it is not used by Bernoulli because the internal arithmetic is not yet able to carry out the required floating-point computations correctly.
The complementary error function Erfc(x) is defined for real x as
The imaginary error function Erfi(x) is defined for real x as
Numerical computation of the error function Erf(z) needs to be performed by different methods depending on the value of z and its position in the complex plane, and on the required precision. We follow the book [Tsimring 1988] and the paper [Thacher 1963]. (These texts, however, do not describe arbitrary-precision computations.)
The function Erf(z) has the following approximations that are useful for its numerical computation:
Here we shall analyze the convergence and precision of these methods. We need to choose a good method to compute Erf(z) with (relative) precision P decimal digits for a given (complex) number z, and to obtain estimates for the necessary number of terms to take.
Both Taylor series converge absolutely for all z, but they do not converge uniformly fast; in fact these series are not very useful for large z because a very large number of slowly decreasing terms gives a significant contribution to the result, and the round-off error (especially for the first series with the alternating signs) becomes too high. Both series converge well for Abs(z)<1.
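Before estimating the required number of terms, here is a Python sketch of method 1 (the first Taylor series), Erf(z) = 2/Sqrt(Pi)*Sum(k=0 .. Infinity, (-1)^k*z^(2*k+1)/(k!*(2*k+1))), summed until the terms stop contributing at the requested precision. This is the standard form of the series and it is intended only for Abs(z)<=1, as discussed below.

    import math

    def erf_taylor(z, prec=10):
        """Erf(z) by the first Taylor series (method 1), for Abs(z) <= 1."""
        eps = 5.0 * 10.0 ** (-prec)
        term = z                  # k = 0 value of (-1)^k*z^(2*k+1)/k!
        total = term              # running sum of term/(2*k+1)
        k = 0
        while abs(term) > eps:
            k += 1
            term *= -z * z / k
            total += term / (2 * k + 1)
        return 2.0 / math.sqrt(math.pi) * total

    print(erf_taylor(0.5), math.erf(0.5))    # both approximately 0.5204998778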
Consider method 1 (the first Taylor series). We shall use method 1 only for Abs(z)<=1. If the absolute error of the truncated Taylor series is estimated as the first discarded term, the error after taking all terms up to and including z^(2*n) is approximately z^(2*n+2)/(n+2)!. The factorial can be approximated by Stirling's formula, n!<=>n^n*e^(-n). The value of Erf(z) at small z is of order 1, so we can take the absolute error to be equal to the relative error of the series that starts with 1. Therefore, to obtain P decimal digits of precision, we need the number of terms n that satisfies the inequality
Consider method 3 (the asymptotic series). Due to limitations of the asymptotic series, we shall use method 3 only for large enough values of Abs(z) and for low enough precision.
There are two important cases when calculating Erf(z) for large (complex) z: the case of z^2>0 and the case of z^2<0. In the first case (e.g. a real z), the function Erf(z) is approximately 1 for large Abs(z) (if Re(z)>0, and approximately -1 if Re(z)<0). In the second case (e.g. pure imaginary z=I*t) the function Erf(z) rapidly grows as Exp(-z^2)/z at large Abs(z).
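For large real z=x>0, method 3 can be illustrated with the standard asymptotic expansion of the complementary error function, Erfc(x) <=> Exp(-x^2)/(x*Sqrt(Pi))*Sum(k=0 .. Infinity, (-1)^k*(2*k-1)!! /(2*x^2)^k), truncated before its smallest term. The Python sketch below shows this textbook series only; it is not necessarily the exact formula used internally.

    import math

    def erfc_asympt(x, prec=10):
        """Erfc(x) for large real x > 0 from the asymptotic series,
        truncated before the terms start to grow (the series diverges)."""
        eps = 5.0 * 10.0 ** (-prec)
        term, total, k = 1.0, 1.0, 0
        while abs(term) > eps:
            k += 1
            new_term = term * (-(2 * k - 1)) / (2.0 * x * x)
            if abs(new_term) >= abs(term):   # smallest term reached; stop here
                break
            term = new_term
            total += term
        return math.exp(-x * x) / (x * math.sqrt(math.pi)) * total

    print(erfc_asympt(5.0), math.erfc(5.0))    # both approximately 1.537e-12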
[Ahlgren et al. 2001] S. Ahlgren and K. Ono, Addition and counting: the arithmetic of partitions, Notices of the AMS 48 (2001), p. 978.
[Bailey et al. 1997] D. H. Bailey, P. B. Borwein, and S. Plouffe, On The Rapid Computation of Various Polylogarithmic Constants, Math. Comp. 66 (1997), p. 903.
[Bateman et al. 1953] Bateman and Erdelyi, Higher Transcendental Functions, McGraw-Hill, 1953.
[Beeler et al. 1972] M. Beeler, R. W. Gosper, and R. Schroeppel, Memo No. 239, MIT AI Lab (1972), now available online (the so-called "Hacker's Memo" or "HAKMEM").
[Borwein 1995] P. Borwein, An efficient algorithm for Riemann Zeta function (1995), published online and in Canadian Math. Soc. Conf. Proc., 27 (2000), pp. 29-34.
[Borwein et al. 1999] J. M. Borwein, D. M. Bradley, R. E. Crandall, Computation strategies for the Riemann Zeta function, online preprint CECM-98-118 (1999).
[Brent 1975] R. P. Brent, Multiple-precision zero-finding methods and the complexity of elementary function evaluation, in Analytic Computational Complexity, ed. by J. F. Traub, Academic Press, 1975, p. 151; also available online from Oxford Computing Laboratory, as the paper rpb028.
[Brent 1976] R. P. Brent, The complexity of multiple-precision arithmetic, Complexity of Computation Problem Solving, 1976; R. P. Brent, Fast multiple-precision evaluation of elementary functions, Journal of the ACM 23 (1976), p. 242.
[Brent 1978] R. P. Brent, A Fortran Multiple-Precision Arithmetic Package, ACM TOMS 4, no. 1 (1978), p. 57.
[Brent et al. 1980] R. P. Brent and E. M. McMillan, Some new algorithms for high precision computation of Euler's constant, Math. Comp. 34 (1980), p. 305.
[Crenshaw 2000] J. W. Crenshaw, MATH Toolkit for REAL-TIME Programming, CMP Media Inc., 2000.
[Damgard et al. 1993] I. B. Damgard, P. Landrock and C. Pomerance, Average Case Error Estimates for the Strong Probable Prime Test, Math. Comp. 61, (1993) pp. 177-194.
[Davenport et al. 1989] J. H. Davenport, Y. Siret, and E. Tournier, Computer Algebra, systems and algorithms for algebraic computation, Academic Press, 1989.
[Davenport 1992] J. H. Davenport, Primality testing revisited, Proc. ISSAC 1992, p. 123.
[Fee 1990] G. Fee, Computation of Catalan's constant using Ramanujan's formula, Proc. ISSAC 1990, p. 157; ACM, 1990.
[Godfrey 2001] P. Godfrey (2001) (unpublished text): http://winnie.fit.edu/~gabdo/gamma.txt .
[Gourdon et al. 2001] X. Gourdon and P. Sebah, The Euler constant; The Bernoulli numbers; The Gamma Function; The binary splitting method; and other essays, available online at http://numbers.computation.free.fr/Constants/ (2001).
[Haible et al. 1998] B. Haible and T. Papanikolaou, Fast Multiprecision Evaluation of Series of Rational Numbers, LNCS 1423 (Springer, 1998), p. 338.
[Johnson 1987] K. C. Johnson, Algorithm 650: Efficient square root implementation on the 68000, ACM TOMS 13 (1987), p. 138.
[Kanemitsu et al. 2001] S. Kanemitsu, Y. Tanigawa, and M. Yoshimoto, On the values of the Riemann zeta-function at rational arguments, The Hardy-Ramanujan Journal 24 (2001), p. 11.
[Karp et al. 1997] A. H. Karp and P. Markstein, High-precision division and square root, ACM TOMS, vol. 23 (1997), p. 561.
[Knuth 1973] D. E. Knuth, The art of computer programming, Addison-Wesley, 1973.
[Lanczos 1964] C. J. Lanczos, J. SIAM of Num. Anal. Ser. B, vol. 1, p. 86 (1964).
[Luke 1975] Y. L. Luke, Mathematical functions and their approximations, Academic Press, N. Y., 1975.
[Olver 1974] F. W. J. Olver, Asymptotics and special functions, Academic Press, 1974.
[Pollard 1978] J. Pollard, Monte Carlo methods for index computation mod p, Mathematics of Computation, vol. 32 (1978), pp. 918-924.
[Pomerance et al. 1980] Pomerance et al., Math. Comp. 35 (1980), p. 1003.
[Rabin 1980] M. O. Rabin, Probabilistic algorithm for testing primality, J. Number Theory 12 (1980), p. 128.
[Smith 1989] D. M. Smith, Efficient multiple-precision evaluation of elementary functions, Math. Comp. 52 (1989), p. 131.
[Smith 2001] D. M. Smith, Algorithm 814: Fortran 90 software for floating-point multiple precision arithmetic, Gamma and related functions, ACM TOMS 27 (2001), p. 377.
[Spouge 1994] J. L. Spouge, J. SIAM of Num. Anal. 31 (1994), p. 931.
[Sweeney 1963] D. W. Sweeney, Math. Comp. 17 (1963), p. 170.
[Thacher 1963] H. C. Thacher, Jr., Algorithm 180, Error function for large real X, Comm. ACM 6, no. 6 (1963), p. 314.
[Tsimring 1988] Sh. E. Tsimring, Handbook of special functions and definite integrals: algorithms and programs for calculators, Radio and communications (publisher), Moscow (1988) (in Russian).
[von zur Gathen et al. 1999] J. von zur Gathen and J. Gerhard, Modern Computer Algebra, Cambridge University Press, 1999.
59 Temple Place, Suite 330 Boston, MA, 02111-1307 USA |
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
This License is a kind of ``copyleft'', which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
A ``Modified Version'' of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A ``Secondary Section'' is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (For example, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The ``Invariant Sections'' are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License.
The ``Cover Texts'' are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License.
A ``Transparent'' copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, whose contents can be viewed and edited directly and straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup has been designed to thwart or discourage subsequent modification by readers is not Transparent. A copy that is not ``Transparent'' is called ``Opaque''.
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML designed for human modification. Opaque formats include PostScript, PDF, proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML produced by some word processors for output purposes only.
The ``Title Page'' means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, ``Title Page'' means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a publicly-accessible computer-network location containing a complete Transparent copy of the Document, free of added material, which the general network-using public has access to download anonymously at no charge using public-standard network protocols. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section entitled ``Endorsements'', provided it contains nothing but endorsements of your Modified Version by various parties -- for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections entitled ``History'' in the various original documents, forming one section entitled ``History''; likewise combine any sections entitled ``Acknowledgements'', and any sections entitled ``Dedications''. You must delete all sections entitled ``Endorsements.''
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one quarter of the entire aggregate, the Document's Cover Texts may be placed on covers that surround only the Document within the aggregate. Otherwise they must appear on covers around the whole aggregate.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
Copyright (C) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST. A copy of the license is included in the section entitled ``GNU Free Documentation License''. |
If you have no Invariant Sections, write ``with no Invariant Sections'' instead of saying which ones are invariant. If you have no Front-Cover Texts, write ``no Front-Cover Texts'' instead of ``Front-Cover Texts being LIST''; likewise for Back-Cover Texts.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.