
Simple Trick to Dramatically Improve Speed of Convergence

We discuss a simple trick to significantly accelerate the convergence of an algorithm when the error term decreases in absolute value over successive iterations while oscillating (not necessarily periodically) between positive and negative values.

We first illustrate the technique on a simple, well-known case: the computation of log 2 using its well-known, slow-converging series. We then discuss a very interesting and more complex case, before finally focusing on a more challenging example in the context of probabilistic number theory and experimental math.

The technique must be tested case by case to assess the improvement in convergence speed: there is no general theoretical rule to quantify the gain, and if the error term does not oscillate in a reasonably balanced way between positive and negative values, the technique produces no gain at all. In the examples below, however, the gain was dramatic.


Let’s say you run an algorithm, for instance gradient descent. The input (the model parameters) is x, and the output is f(x), for instance a local optimum. We consider f(x) to be univariate, but the technique easily generalizes to the multivariate case by applying it separately to each component. At iteration k, you obtain an approximation f_k(x) of f(x), and the error is E_k(x) = f(x) – f_k(x). The total number of iterations is N, starting with the first iteration k = 1.
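As a concrete instance of this setup, take the log 2 series mentioned in the introduction: the k-th partial sum plays the role of f_k(x), and the error E_k(x) alternates in sign while shrinking in absolute value. Here is a minimal sketch in Python (variable names are ours, for illustration only):

```python
import math

# The log 2 example: f_k(x) is the k-th partial sum of the alternating
# series 1 - 1/2 + 1/3 - ..., which converges (slowly) to log 2.
exact = math.log(2)  # the target value f(x)

f_k = 0.0
for k in range(1, 9):
    f_k += (-1) ** (k + 1) / k   # approximation at iteration k
    E_k = exact - f_k            # error E_k(x) = f(x) - f_k(x)
    print(f"k={k}  f_k(x)={f_k:+.6f}  E_k(x)={E_k:+.6f}")
# The printed errors alternate in sign and shrink in absolute value:
# exactly the oscillating-error regime the trick is designed for.
```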

The idea consists in first running the algorithm as is, and then computing the “smoothed” approximations in m successive steps, sketched below and detailed in the full article.
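The exact steps are in the full article linked below. As a rough, non-authoritative sketch, the snippet assumes the simplest variant: each smoothing step replaces the sequence of approximations by the averages of consecutive pairs, which cancels most of an error that alternates in sign. The function names and the choices N = 20, m = 10 are illustrative, not taken from the article.

```python
import math

TARGET = math.log(2)  # exact value for the log 2 example

def partial_sums(n):
    """Return f_1(x), ..., f_n(x): the first n partial sums of the series."""
    sums, s = [], 0.0
    for k in range(1, n + 1):
        s += (-1) ** (k + 1) / k
        sums.append(s)
    return sums

def smooth(approximations, m):
    """Apply m smoothing steps; each step replaces the sequence by the
    averages of consecutive terms: g_k = (f_k + f_{k+1}) / 2."""
    seq = list(approximations)
    for _ in range(m):
        seq = [(a + b) / 2 for a, b in zip(seq, seq[1:])]
    return seq

N, m = 20, 10
f = partial_sums(N)
g = smooth(f, m)
print(f"raw error after N={N} iterations: {TARGET - f[-1]: .1e}")
print(f"smoothed error after m={m} steps: {TARGET - g[-1]: .1e}")
```

Because the error alternates in sign, each averaging step gains roughly an extra factor of 1/k in accuracy, so the smoothed estimate lands many orders of magnitude closer to log 2 than the raw partial sum, at no cost beyond the N original iterations.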

Read the full article here.

Content

  • General framework and simple illustration
  • A strange function
  • Even stranger functions
