We all know that exponential functions grow faster than polynomials. Let us consider the following family of functions: f(n) = n^a ⋅ (log n)^b ⋅ (log log n)^c ⋅ (log log log n)^d ⋯, where the leading exponent a is positive.

I think anything that is "slowly growing" admits this type of asymptotic expansion. In short, this type of representation is "complete": there is nothing between n and n ⋅ log n other than members of the above class, for instance n ⋅ (log n)^(1/2) ⋅ (log log log n)^5 / (log log n)^3.
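
Here is a quick numeric sanity check in Python (a sketch of mine, not standard notation) showing that this example does eventually sit strictly between n and n ⋅ log n, though the crossover above n happens remarkably late:

```python
import math

# Ratios of the example f(n) = n * (log n)^(1/2) * (log log log n)^5 / (log log n)^3
# against n and against n * log n. Eventually f/n -> infinity and f/(n log n) -> 0,
# but empirically f only overtakes n around n ~ 1e60.
for k in (50, 100, 200, 300):
    n = 10.0 ** k
    L1 = math.log(n)
    L2 = math.log(L1)
    L3 = math.log(L2)
    f_over_n = L1 ** 0.5 * L3 ** 5 / L2 ** 3
    print(f"n = 1e{k}:  f/n = {f_over_n:6.3f}   f/(n log n) = {f_over_n / L1:.2e}")
```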

Is my statement true? How can one make it rigorous, assuming it is correct? Also,

  • What would be the slowest growing function that grows faster than every function in the above class?
  • Give an example of a function that grows faster than every function in that class, but more slowly than exp(n).
  • What happens if you allow the exponents to be unbounded and the product to be infinite? For instance, consider f(n) = n ⋅ (log n)^2 ⋅ (log log n)^4 ⋅ (log log log n)^8 ⋯ (see the sketch below).
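
For the last bullet, a small sketch gives a feel for how fast the doubling-exponent product grows. The truncation rule is my own choice: stop as soon as the next iterated logarithm drops to 1 or below, since later factors are no longer well defined.

```python
import math

def doubling_product(n):
    """Truncated f(n) = n * (log n)^2 * (log log n)^4 * ... with exponents 2^k.
    Stops when the iterated log falls to <= 1 (deeper factors are undefined)."""
    value = float(n)
    x = float(n)
    k = 1
    while True:
        x = math.log(x)
        if x <= 1.0:
            break
        value *= x ** (2 ** k)
        k += 1
    return value

for label, n in (("1e6", 10**6), ("1e12", 10**12), ("1e100", 10**100)):
    print(f"f({label}) ~ {doubling_product(n):.3e}")
```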

In short, is there some kind of topological framework that handles manipulations of these quantities? Indeed, they are not really functions, but asymptotic representations or quantities: a different type of mathematical object, with its own arithmetic and topology.

Replies to This Discussion

Some readers noticed that you can't have an infinite sequence of nested logarithms. Instead we could use the absolute value of the logarithm: |log n|, |log |log n||, |log |log |log n||| and so on. But then there are other issues. Below is the plot of the function defined by f(n + 1) = |log f(n)| with f(1) = 2, for n between 1 and 14,000.
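
For anyone who wants to reproduce the plot, a minimal sketch (natural log assumed; the base isn't stated above):

```python
import math
import matplotlib.pyplot as plt

# Orbit of f(n+1) = |log f(n)| with f(1) = 2.
values = [2.0]
while len(values) < 14_000:
    x = values[-1]
    if x <= 0.0:      # |log 1| = 0, and log(0) is undefined, so stop just in case
        break
    values.append(abs(math.log(x)))

plt.plot(range(1, len(values) + 1), values, linewidth=0.3)
plt.xlabel("n"); plt.ylabel("f(n)")
plt.title("f(n+1) = |log f(n)|, f(1) = 2")
plt.show()
```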

Finally, n log* n and n α(n) are strictly between n and n log n. Here log* is the iterated logarithm (log star) function, and α is the inverse of the Ackermann function. However, these functions do not belong to the general family discussed in my question. See also here.
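
For reference, log* is trivial to compute; a throwaway natural-log version (base 2 is the more common convention in computer science):

```python
import math

def log_star(x, base=math.e):
    """Iterated logarithm: how many times log must be applied before the
    result drops to <= 1. Grows absurdly slowly: even log*(1e300) is only 4."""
    count = 0
    while x > 1.0:
        x = math.log(x, base)
        count += 1
    return count

print([log_star(10.0 ** k) for k in (1, 10, 100, 300)])   # [2, 4, 4, 4]
```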

Yeah, taking absolute values won't do anything useful. Repeated logarithms take you into wildly unpredictable negative territory; taking the absolute value and then the log again makes for some randomish values, so it's probably a chaotic dynamical system. Feigenbaum's constant and Julia sets and all that. If I weren't lazy, I'd try it in Python.

I actually spent a bit of time on it. Taking the absolute value leads to an ergodic system: chaotic for sure, but very stable from a probabilistic point of view, with a smooth equilibrium distribution, like the logistic map but unbounded. The equilibrium distribution is a solution of the stochastic integral equation F(x) = F(|log x|), where the unknown is the probability distribution (F denotes its CDF). More on this here.
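
The equilibrium distribution is easy to estimate empirically from a single long orbit; a minimal sketch (run length, burn-in, and plotting range are arbitrary choices of mine):

```python
import math
import matplotlib.pyplot as plt

# Estimate the equilibrium density of x -> |log x| from one long orbit.
x, orbit = 2.0, []
for i in range(2_000_000):
    # Restart from 2.0 in the (measure-zero) event the orbit lands exactly on 0.
    x = abs(math.log(x)) if x > 0.0 else 2.0
    if i >= 10_000:                # discard a burn-in period
        orbit.append(x)

plt.hist(orbit, bins=500, range=(0, 8), density=True)   # tail above 8 is clipped
plt.xlabel("x"); plt.ylabel("estimated equilibrium density")
plt.show()
```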

Reminds me of way back when, just for fun, I tapped a number into a calculator, then LOG, then x². Repeat forever. I built up a distribution, plotting it on a Tektronix 4051. I might still have a brittle old sheet of thermal paper with the plot colored in with a turquoise marker, in some old file.

This has me thinking. A long time ago, as an undergrad in physics, I got curious about iterated exponentials: exp_a(x) such that exp_0(x) = x, exp_1(x) = e^x, and exp_a(exp_b(x)) = exp_(a+b)(x). Is there a "half exponential" function h(x) such that h(h(x)) = e^x? I found out there is. There is a corresponding half-log H(x) such that H(H(x)) = log(x). It turns out there is an infinite set of solutions, but most are just stupid unless you require something special such as smoothness (defined in a certain sense) or a special symmetry. I found nothing in the literature (at the time) about their asymptotic growth.

Being a physicist, I wonder: do h(x) or its inverse have any use in describing physical reality? Not that I know of. Maybe in asymptotic growth? Well, maybe. How fully general is your n^a ⋅ (log n)^b ⋅ (log log n)^c ⋅ …? It's like forming a general wavefunction as a linear combination of basis states without knowing whether your basis set is complete.

The trick to making progress with the iterated exponential is to define the Abel function: A(exp(x)) = A(x) + 1. It's a sort of "meta-logarithm," the way ordinary logarithms turn multiplication into addition, making slide rules possible (ah, the good ol' days). It is trivial to see that A(exp(exp(x))) = A(x) + 2, and likewise A(log(x)) = A(x) − 1. Thus the half exponential is h(x) = exp_(1/2)(x), since applying it twice adds 1 to A.

So we could write log log log(n) = V(A(n) − 3), where I'm defining V as the inverse function of A. You want that to some power d? And multiply a bunch of similar expressions? Then it seems natural to look at the logarithm of the whole thing, log(f(n)), now a sum of terms.
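
A crude A and V are easy to build numerically: declare A(x) = x on [0, 1) and extend both ways with the functional equation. That gives a continuous, strictly increasing Abel function with kinks at the integer "seams", not one of the smooth solutions I mentioned above, but enough to play with fractional iterates. A sketch:

```python
import math

def A(x):
    """Crude Abel function for exp: A(x) = x on [0, 1), extended by
    A(exp(x)) = A(x) + 1 and A(log(x)) = A(x) - 1."""
    if 0.0 <= x < 1.0:
        return x
    if x >= 1.0:
        return A(math.log(x)) + 1.0
    return A(math.exp(x)) - 1.0        # x < 0

def V(y):
    """Inverse of A, by the mirrored recursion."""
    if 0.0 <= y < 1.0:
        return y
    if y >= 1.0:
        return math.exp(V(y - 1.0))
    return math.log(V(y + 1.0))

def exp_a(x, a):
    """Fractional iterate of exp: exp_a(exp_b(x)) = exp_(a+b)(x)."""
    return V(A(x) + a)

h = lambda x: exp_a(x, 0.5)            # "half exponential"
print(h(h(1.3)), math.exp(1.3))        # h(h(x)) == e^x up to float error
H = lambda x: exp_a(x, -0.5)           # the corresponding "half log"
print(H(H(10.0)), math.log(10.0))
```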

log((log log log n)^d) = d ⋅ log(V(u − 3)), where we define u = A(n). Hmm, let's define a function Q(z) = log(V(z)). So now this triple-log factor becomes d ⋅ Q(u − 3). More generally,

log(f(n)) = SUM (over some set of non-negative integers i) c_i ⋅ Q(u − i).
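
Continuing the sketch above (same A and V), this is easy to check numerically, since with the crude Abel function V(u − k) is literally the k-fold iterated log of n:

```python
def Q(z):
    """Q(z) = log(V(z)); with the crude A above, Q(A(n) - k) is the log of
    the k-fold iterated logarithm of n."""
    return math.log(V(z))

n = 1.0e12
u = A(n)                                   # u = A(n), as in the text
logloglog_n = math.log(math.log(math.log(n)))
d = 5.0
print(d * math.log(logloglog_n))           # log((log log log n)^d) directly
print(d * Q(u - 3.0))                      # same value via d * Q(u - 3)
```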

A convolution of Q with a finite set of Dirac deltas with arbitrary coefficients. Is this progress? I hope I didn't goof up in some embarrassingly stupid way; my head is a bit stuffy with a cold today.

There is, of course, the problem that too many logs in a row take you to negative values; no matter how large n is, we can't let i get too large. In practice n is only millions, billions, ... on the order of quintillions at best. So we have this weird Q function, duplicated and shifted over by one or two or three steps, and linearly combined. I find it hard to imagine this leading to a range of functions wide enough to cover whatever function space is relevant. How would half-logs fit in? f(n) = n ⋅ H(n)? I imagine we'd need a Q(u) shifted by a half step. I imagine a continuous range for i, from 0 to "a few," an arbitrary continuous function of coefficients (corresponding to the powers in your n^a ⋅ (log n)^b ⋅ (log log n)^c ⋅ … formula), and integrating over i. But maybe there are reasons that won't work.

So, my educated guess is: no, your product of multiple logarithms is not fully general; it has an infinity of cracks in which to sneak some funny functions.  I hope someone with better mathematical skills can make all this more rigorous. I am, after all, a physicist. We subtract infinity from infinity and get numbers that match experiment to 12 digits  :P

For fun, consider what happens to exp_a(x) when a is very close to zero, or very close to 1, or to −1. Can anything in physics or computational complexity theory be described with that sort of asymptotic growth, as such or as part of a big-O formula? I dunno. It's time for a beer.

Back when I started the iterated exponential research, I wrote a short, bad paper and presented it at some undergrad conferences. I've since rewritten it, finishing it just last year, with glorious plots made with numpy and matplotlib, and put the Python code and the paper as a PDF on my GitHub: https://github.com/darenw/FRITEXP   Happy reading, and I'm interested in comments and critique.

I've seen big-O bounds with log n, of course, and I've seen log log n; it comes up in image processing and various algorithms. But I don't recall seeing the triple log come up in practice. Are there any curiously elegant examples of how it might arise in data analysis?
