# High Precision Computing: Simulating Chaotic Systems in Python and R

I am trying to run simulations of chaotic systems, for instance X(k) = 4 X(k-1) (1 - X(k-1)), but I noticed that for all these systems the loss of precision propagates exponentially, to the point that after 50 iterations, all the values generated are completely wrong. I wrote some code in Perl using the BigNum library (providing hundreds of decimals of accuracy), and it shows how dramatically standard arithmetic fails in this context.

You can check out the context, my code, and an Excel spreadsheet that illustrates the issue, here.

I am looking for a piece of Python code that could nicely do the job, probably using some kind of BigNum library in Python. Can anyone make recommendations, or rewrite my code in Python? Alternatively, how could this be done in R?

Thank you! For arbitrary precision in many programming languages, check out this reference. I am not sure whether it is up to date and correct, and I could not find anything about R. Note that some of these packages are not truly "arbitrary precision." More on this (for Python) here.
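To make the effect concrete, here is a minimal Python sketch (my own, not from the original post) using only the standard-library `decimal` module as the "BigNum": it iterates the map X(k) = 4 X(k-1) (1 - X(k-1)) both in 64-bit floats and at 200 significant digits, starting from 0.3, and prints how far apart the two trajectories end up after 50 steps.

```python
from decimal import Decimal, getcontext

getcontext().prec = 200  # 200 significant digits for the Decimal track

x_float = 0.3            # ordinary 64-bit float
x_big = Decimal("0.3")   # high-precision value

for k in range(1, 51):
    x_float = 4 * x_float * (1 - x_float)
    x_big = 4 * x_big * (1 - x_big)

# After 50 iterations the float trajectory has lost essentially all
# correct digits relative to the high-precision one.
print(x_float)
print(float(x_big))
print(abs(x_float - float(x_big)))
```

The initial representation error of 0.3 in binary (about 1e-17) roughly doubles at each step, so by iteration 50 it has been amplified to order one.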


### Replies to This Discussion

I don't see a way to do this in R or Python that would execute quickly enough. Two possible alternatives:

1. Write your routine in C# using the BigInteger class, which has pretty much everything you need to do math on integers of unlimited size. I haven't tried it, but I understand that it is now possible to link C# routines in .NET to R.
2. Write from scratch in C and link to R. There is a mechanism for doing this already in R. Knuth's TAOCP ch 4 goes through the classical algorithms for arithmetic on big integers.

Why do you need such big numbers? I find that R's standard RNG -- a Mersenne twister -- or L'Ecuyer's WELL package in R works for me.

Hi David,

My goal here is to illustrate high-precision computing, in a context where many scientists are not aware that their simulations are wrong after just 50 iterations (though, due to the nature of the process, it does not really matter).

I am aware of the Mersenne twister, and yes it is very good for random number generation. Would love to see how R and Python compare in terms of high precision computing, regardless of the application. Yep, C has nice libraries too for that purpose.

Thanks,

Vincent

As for Python, someone on Reddit posted the following, with seed = 3/10. If I run that I get the output below, where fixed is the result using float, arbitrary is the result using decimal.Decimal, and diff is what I believe represents the accumulated rounding error. That said, I can't really run your Perl, I'm not sure I haven't missed something in transcribing your code, and since we're starting from different seeds, the values in your spreadsheet bear no relation to mine.

My comment is that over the 10,000 iterations, the difference at any given iteration is, on average, much bigger than 0.1229. The difference was computed only at iteration 10,000 in the Python code above.
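That per-iteration behavior is easy to check. Here is a sketch (my own, not the Reddit poster's code) that tracks the float-vs-Decimal gap at every iteration rather than only at the last one, with the same seed 3/10:

```python
from decimal import Decimal, getcontext

getcontext().prec = 100  # ample precision for a few hundred iterations

x_fixed = 0.3                      # 64-bit float
x_arb = Decimal(3) / Decimal(10)   # high-precision value
diffs = []
for k in range(1, 201):
    x_fixed = 4 * x_fixed * (1 - x_fixed)
    x_arb = 4 * x_arb * (1 - x_arb)
    diffs.append(abs(x_fixed - float(x_arb)))

# The gap grows roughly like 2^k until it saturates at order 1, so the
# average gap over many iterations is large even when the gap at one
# particular iteration happens to be small.
print(max(diffs))
print(sum(diffs) / len(diffs))
```

With 100 digits of working precision, the Decimal track loses only about 60 digits over 200 iterations, so it remains a valid reference for the comparison.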

Here's a Java version. You will need to supply your own implementation of bigSqrt. This version diverges from the correct answer at iteration #30; I was hoping it would do better. Unfortunately, Java does not have a built-in square root for BigDecimal.

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.MathContext;
import java.math.RoundingMode;

public static void main(String[] args) {
    BigDecimal pi = new BigDecimal(4 * Math.atan2(1, 1));
    BigInteger bigint = new BigInteger("10").pow(50);
    BigDecimal seed = pi.divide(new BigDecimal(11), MathContext.DECIMAL128);
    BigDecimal z = seed;
    long k;
    String zp;
    for (k = 1; k < 10000; k++) {
        z = z.multiply(new BigDecimal("4")).multiply(new BigDecimal("1").subtract(z));
        z = bigSqrt(z, MathContext.DECIMAL128);
        z = new BigDecimal(bigint).multiply(z).add(new BigDecimal(".5"))
                .setScale(0, RoundingMode.CEILING).divide(new BigDecimal(bigint));
        zp = String.format("%.10f", z);
        System.out.println(k);
        System.out.println(zp);
    }
}
```
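For comparison (my own sketch, not part of the thread), the same square-root variant of the iteration takes only a few lines in Python, where mpmath supplies the high-precision square root that the Java snippet has to delegate to a hand-rolled bigSqrt:

```python
from mpmath import mp, pi, sqrt

mp.dps = 40  # roughly comparable to Java's DECIMAL128 (34 digits)

# iterate z -> sqrt(4 z (1 - z)) from the seed pi/11
z = pi / 11
for k in range(1, 31):
    z = sqrt(4 * z * (1 - z))
    print(k, mp.nstr(z, 10))
```

Raising `mp.dps` is all it takes to push the divergence point further out.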

Using the Apfloat Java library available at http://www.apfloat.org/apfloat_java/, you can generate very high precision numbers in Java code (hundreds or thousands of digits, or any arbitrary number of digits; computation speed obviously degrades as you request more precision). Below is my Java code generating numbers using the well-known iterative function x(k+1) = 4*x(k)*(1-x(k)). You can compare the numbers from this iteration against the known closed-form kth-element solution, in pseudo-code sin((2^k)*init*PI)^2, where the caret denotes exponentiation. Interestingly, the sine function degrades precision in both the estimated and the exact computation, since we are taking the sine of a larger and larger argument at each iteration (due to the 2^k part of the formula). Here's my Java code:

```java
import org.apfloat.Apfloat;
import org.apfloat.ApfloatMath;

public class Main {
    public static void main(String[] args) {
        long k;
        long prec = 30;
        long highprec = 1000;
        Apfloat seed = ApfloatMath.pi(prec).divide(new Apfloat(11, prec));
        Apfloat estimate = seed;
        Apfloat init = new Apfloat(1).divide(ApfloatMath.pi(highprec))
                .multiply(ApfloatMath.asin(ApfloatMath.sqrt(
                        ApfloatMath.pi(highprec).divide(new Apfloat(11)))));
        for (k = 1; k < 100; k++) {
            estimate = new Apfloat(4).multiply(estimate)
                    .multiply(new Apfloat(1).subtract(estimate));
            Apfloat actual = ApfloatMath.pow(
                    ApfloatMath.sin(init.multiply(ApfloatMath.pi(highprec))
                            .multiply(ApfloatMath.pow(new Apfloat(2), new Apfloat(k, highprec)))), 2);
            System.out.println(k);
            System.out.println(String.format("%#s", estimate));
            System.out.println(String.format("%#s", actual));
        }
    }
}
```

Using Python's mpmath (arbitrary precision) library, with the seed pi/11 and 10000, 20000, 30000, 45000 and 50000 iterations, at 1000, 10000 and 100000 decimal places of precision. Notice that the number of decimal places by which the 10000- and 100000-decimal-place results differ depends almost linearly on the number of iterations: as the iteration count grows, precision is steadily destroyed.
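A sketch of that kind of comparison (my own, assuming the setup described above: mpmath, seed pi/11, and the closed-form solution x_k = sin(2^k * asin(sqrt(x_0)))^2 of the logistic map, equivalent to the sin((2^k)*init*PI)^2 formula in the Java post):

```python
from mpmath import mp, mpf, sin, asin, sqrt, pi

mp.dps = 1000  # 1000 decimal places of working precision

x0 = pi / 11
theta = asin(sqrt(x0))  # then x_k = sin(2^k * theta)^2

# iterate the map 100 times
x = x0
for k in range(1, 101):
    x = 4 * x * (1 - x)

# closed-form value at k = 100
exact = sin(mpf(2) ** 100 * theta) ** 2

# Each iteration costs about log10(2) ~ 0.3 digits, and evaluating sin
# at the huge argument 2^100 * theta costs about 30 more, so at 1000
# digits the two values still agree to hundreds of decimal places.
print(mp.nstr(abs(x - exact), 10))
```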


R user here.

Browsing through an old paper led me to look at the chaotic dynamics of $x_i = 4 x_{i-1} (1 - x_{i-1})$. The paper suggests looking at the cumulative sums of $x - 0.5$, i.e. $P_i = P_{i-1} + x_{i-1} - 0.5$, so I did, but I computed $x - 0.5$ in two ways: one, by using the recursion above; two, by writing the recursion for $y_i = x_i - 0.5$ as $y_i = -0.5 + 4(0.25 - y_{i-1}^2)$. The results are different, and by plotting the cumulative processes it is clear that after a few iterations the differences are noticeable.

R has the package Rmpfr to run computations using the MPFR C library.

Below is my R snippet. Varying the precision D, one sees the effect of precision on the computations. Taking D = 1000 gives indistinguishable results for the two ways of computing the dynamics over 1000 iterations.

```r
require("Rmpfr")

#----------- precision in bits
D = 100

#----------- number of iterations
N = 1000

a = 4

#----------- CHAOTIC process, and cumulative process
X = mpfr(rep(0, N), D)
X[1] = .3

P = mpfr(rep(0, N), D)
P[1] = 1000

for (i in 2:N) { X[i] = a*X[i-1]*(1-X[i-1])
  P[i] = P[i-1] + X[i-1] - .5 }

#----------- CHAOTIC process centered, and cumulative process
dP = mpfr(rep(0, N), D)
dP[1] = X[1] - .5

P2 = mpfr(rep(0, N), D)
P2[1] = 1000

for (i in 2:N) { dP[i] = a*(.25 - dP[i-1]^2) - .5
  P2[i] = P2[i-1] + dP[i-1] }

#----------- PLOT
windows(8, 8)
plot(1:N, as.numeric(P), type='l', col='blue')
points(1:N, as.numeric(P2), type='l', col='red')
```

Here is code in R. The initial value is X[1] = 0.3, and the code outputs the first few digits of X[10,000] and its two neighbors.

According to a previous post, the values obtained form a (non-independent) sample drawn from the density

f(u) = 1/ ( 2* sqrt(1-u) ) .

The histogram of {X[1], ..., X[10001]} looks like a sample drawn from f.

```r
require("Rmpfr")

#--- precision in bits
D = 20000

#--- number of iterations
N = 10001

#---- CHAOTIC process
X = mpfr(rep(0, N), D)
X[1] = .3

for (i in 2:N) { X[i] = sqrt( 4*X[i-1]*(1-X[i-1]) ) }

#------------------ Terminal points
Z = floor( .5 + 10^50*X[9999:10001] )/10^50

format(Z, digits = 10)
roundMpfr(Z, precBits = 40)
#  "0.2929369280" "0.9102194993" "0.5717340724"
# 3 'mpfr' numbers of precision 40 bits
#  0.2929369280459 0.9102194993147 0.5717340723877

#------------------------- Histogram and theoretical density

u = 0:100/101

windows(8, 8)

hist(as.numeric(X), col='blue',
     breaks=50, probability=TRUE,
     xlab='u', ylab='y (Density)',
     main='Histogram of values and theoretical density')
points(u, (1-u)^(-.5)/2, type='l', col='red', lwd=3)

text(.2, 1.5, expression(y == frac(1, 2*sqrt(1-u))))
```
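As a numerical cross-check of that density (a sketch in Python, my own; plain floats suffice here because rounding scrambles individual iterates but, empirically, not their distribution), the CDF corresponding to f is F(u) = 1 - sqrt(1 - u), which can be compared against the empirical CDF of the iterates:

```python
import bisect
import math

# iterate x -> sqrt(4 x (1 - x)), the map used in the R code above;
# the min() guards against rounding pushing the product just above 1
x = 0.3
xs = []
for _ in range(100000):
    x = math.sqrt(min(1.0, 4 * x * (1 - x)))
    xs.append(x)

xs.sort()
# compare the empirical CDF to F(u) = 1 - sqrt(1 - u) at a few points
for u in (0.25, 0.5, 0.75, 0.9):
    empirical = bisect.bisect_left(xs, u) / len(xs)
    theoretical = 1 - math.sqrt(1 - u)
    print(u, round(empirical, 4), round(theoretical, 4))
```

The two columns should agree to within sampling error, mirroring the histogram-versus-curve plot produced by the R code.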