
An Empirical Proof of the Riemann Conjecture

The correct term should be heuristic proof: not a formal proof from a mathematical point of view, but strong arguments based on empirical evidence. It is noteworthy enough that I decided to publish it. In this article I go straight to the point without discussing the concepts in detail. The goal is to offer a quick overview so that the busy reader can get a good idea of the method. It also serves as an introduction to the Python MPmath library for scientific computing, which handles advanced complex mathematical functions. Indeed, I hesitated for a long time between the current title and “Introduction to the MPmath Python Library for Scientific Programming”.

The Generalized Riemann Hypothesis

The Generalized Riemann Hypothesis or conjecture (GRH) states the following: a certain type of complex function L(s, χ) has no roots when the real part of the argument s is strictly between 0.5 and 1. Here χ is a parameter called a character, and s = σ + it is the argument; its real part is σ. The notation may look awkward, but it is well established, and I use it so as not to confuse mathematicians. The character χ is a multiplicative function defined on the positive integers. I focus on χ4, the main Dirichlet character modulo 4:

  • If p is a prime number and p – 1 is a multiple of 4, then χ4(p) = 1
  • If p is a prime number and p – 3 is a multiple of 4, then χ4(p) = -1
  • If p = 2, χ4(p) = 0
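Since χ4 is multiplicative and periodic modulo 4, the three rules above extend it to all positive integers: it is 0 on even numbers, +1 when n mod 4 = 1, and -1 when n mod 4 = 3. A minimal sketch in Python (the function name chi4 is mine):

```python
def chi4(n):
    """The Dirichlet character modulo 4 used in this article,
    extended multiplicatively to all positive integers."""
    if n % 2 == 0:
        return 0                      # even n, in particular p = 2
    return 1 if n % 4 == 1 else -1    # odd n: depends only on n mod 4
```

For example, chi4(5) = 1 and chi4(7) = -1, and multiplicativity holds: chi4(35) = chi4(5) * chi4(7).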

These functions have an Euler product:

L(s,\chi) = \prod_p \Bigg(1 - \frac{\chi(p)}{p^{s}}\Bigg)^{-1},

where the product is over all primes. The core of the problem is the convergence of the product when the real part σ of s satisfies σ ≤ 1. If σ > 1, convergence is absolute, and thus the Euler product shows that L has no root there. If convergence is not absolute, there could be invisible roots, “hidden” behind the product formula. This happens when σ = 0.5.
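When σ > 1 the convergence is easy to verify numerically. Here is a minimal sketch (not the code from the later section: the sieve helper and the truncation bound 100,000 are my own choices). At s = 2, the value L(2, χ4) is known to be Catalan's constant, which mpmath provides as a built-in:

```python
import mpmath

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

s = 2
prod = mpmath.mpf(1)
for p in primes_up_to(100000):
    if p % 4 == 1:
        chi = 1
    elif p % 4 == 3:
        chi = -1
    else:
        chi = 0   # p = 2 contributes a factor of 1
    prod *= 1 / (1 - chi * mpmath.power(p, -s))

print(prod)            # partial Euler product at s = 2
print(mpmath.catalan)  # L(2, chi4) = Catalan's constant, 0.9159655...
```

With primes up to 100,000 the truncated product already agrees with Catalan's constant to about five decimal places.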

Where it Gets Very Interesting

The primes p alternate somewhat randomly between χ4(p) = +1 and χ4(p) = -1, in equal proportions (50/50) when you consider all of them. This is a consequence of Dirichlet's theorem. However, those with χ4(p) = -1 get a very strong head start, a fact known as Chebyshev's bias.
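Both the 50/50 split and the early lead of the χ4(p) = -1 class can be observed directly. A quick sketch (the sieve helper and the bound 100,000 are my choices, not part of the article's code):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

c1 = c3 = 0   # counts for chi4(p) = +1 and chi4(p) = -1
for p in primes_up_to(100000):
    if p % 4 == 1:
        c1 += 1
    elif p % 4 == 3:
        c3 += 1

print(c1, c3)   # nearly equal, but the chi4 = -1 class is ahead
```

Up to 100,000 the two counts are within a fraction of a percent of each other, yet the χ4(p) = -1 class stays in the lead: Chebyshev's bias in action.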

The idea is to rearrange the factors in the Euler product so that a factor with χ4(p) = +1 is always followed by one with χ4(q) = -1, and conversely, with as few changes as possible. I call the resulting product the scrambled product. You may remember your math teacher saying that you cannot change the order of the terms in a series unless convergence is absolute. This is true here too; indeed, it is the very core of the issue.

Assuming the operation is legit, you group each pair of consecutive factors, (1 - p^(-s)) and (1 + q^(-s)), into one factor. When p is very large, the corresponding q is very close to p, so that (1 - p^(-s))(1 + q^(-s)) is very close to 1 - p^(-2s). For instance, if p = 4,999,961, then q = 4,995,923.
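You can check that approximation numerically with the pair from the text (the choice s = 0.75, a real value strictly between 0.5 and 1, is mine):

```python
# The example pair quoted in the article.
p, q = 4999961, 4995923
s = 0.75

paired = (1 - p**(-s)) * (1 + q**(-s))   # one grouped factor
approx = 1 - p**(-2*s)                   # the claimed approximation

print(paired, approx, abs(paired - approx))  # the difference is negligible
```

The two quantities agree to better than seven decimal places, because p^(-s) and q^(-s) nearly cancel each other.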

The Magic Trick

Assuming p and the subsequent q = p + Δp are close enough when p is very large, the scrambling and grouping operations turn the product into one that converges absolutely when σ (the real part of s) is strictly greater than 0.5. As a result, there is no root if σ > 0.5, even though there are infinitely many roots when σ = 0.5, where convergence of the product is uncertain. In the latter case one may use analytic continuation to compute L. Et voilà!
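To spell out why σ > 0.5 appears, expand one grouped factor, taking χ4(p) = +1 and the paired q = p + Δp with χ4(q) = -1:

(1 - p^{-s})(1 + q^{-s}) = 1 - (pq)^{-s} - (p^{-s} - q^{-s}).

The main term (pq)^{-s} behaves like p^{-2s}, and the sum of |p^{-2s}| over primes converges when 2σ > 1, that is, when σ > 0.5. Everything then hinges on the correction p^{-s} - q^{-s}, which is of order |s| Δp p^{-σ-1}: it is absolutely summable for σ > 0.5 only if Δp grows slowly enough compared to p.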

It all boils down to whether Δp is small enough compared to p, when p is large. To this day no one knows, thus GRH remains unproved. Yet you can use the Euler product to compute L(s, χ4) not only when σ > 1, of course, but also when σ > 0.5. You can do it with the Python code below. It is inefficient (there are much faster methods), but it works! In mathematical circles, I was told that such computations are “non-legit” because no one knows the convergence status; knowing the convergence status is equivalent to solving GRH. Yet if you play with the code, you'll see that the convergence is “obvious”, at least when t is not too large, σ is not too close to 0.5, and you use many millions of prime numbers in the product.

There is one caveat. You can use the same approach for various Dirichlet L-functions L(s, χ), not just for χ = χ4. But there is one χ for which the method does not apply: the constant character equal to 1, which does not alternate. That χ corresponds to the classic Riemann zeta function ζ(s). Even though the method won't work in the most famous case, obtaining a formal proof just for χ4 would turn you instantly into the most famous mathematician of all time. However, the most recent attempts to prove GRH avoid the direct approach (going through the factoring) and instead focus on other statements equivalent to or implying GRH. See my article on the topic, here. For the roots of L(s, χ4), see here.

Python Code with MPmath Library

I computed L(s, χ) and various related functions using different formulas. The goal is to test whether the Euler product converges as expected to the correct value when 0.5 < σ < 1. The code is also on my GitHub repository, here.

import matplotlib.pyplot as plt
import mpmath
import numpy as np
from primePy import primes

m = 150000
p1 = []   # primes p < m with p % 4 == 1, i.e. chi4(p) = +1
p3 = []   # primes p < m with p % 4 == 3, i.e. chi4(p) = -1
p  = []   # all odd primes < m
for k in range(m):
    if primes.check(k) and k > 1:
        if k % 4 == 1:
            p1.append(k)
            p.append(k)
        elif k % 4 == 3:
            p3.append(k)
            p.append(k)

cnt1 = len(p1)
cnt3 = len(p3)
n = min(cnt1, cnt3)
max = min(p1[n-1], p3[n-1])   # largest usable prime (shadows the builtin max)


sigma = 0.95
t_0 = 6.0209489046975965 # 0.5 + t_0*i is a root of DL4

DL4 = []    # values of the scrambled product, one per value of t
imag = []   # the corresponding values of t
print("------ MPmath library")
for t in np.arange(0, 1, 0.25):
    f = mpmath.dirichlet(complex(sigma, t), [0, 1, 0, -1])
    print("%8.5f %8.5f %8.5f" % (t, f.real, f.imag))

print("------ scrambled product")
for t in np.arange(0, 1, 0.25):
    prod = 1.0
    for k in range(n):
        num1 = 1 - mpmath.power(1/p1[k], complex(sigma, t))   # chi4 = +1
        num3 = 1 + mpmath.power(1/p3[k], complex(sigma, t))   # chi4 = -1
        prod *= (num1 * num3)
    prod = 1/prod
    DL4.append(prod)
    imag.append(t)
    print("%8.5f %8.5f %8.5f" % (t, prod.real, prod.imag))

DL4_bis = []
print("------ scrambled swapped")
for t in np.arange(0, 1, 0.25):
    prod = 1.0
    for k in range(n):
        num1 = 1 + mpmath.power(1/p1[k], complex(sigma, t))   # chi4 values swapped
        num3 = 1 - mpmath.power(1/p3[k], complex(sigma, t))
        prod *= (num1 * num3)
    prod = 1/prod
    DL4_bis.append(prod)
    print("%8.5f %8.5f %8.5f" % (t, prod.real, prod.imag))

print("------ compare zeta with DL4 * DL4_bis")
for i in range(min(len(DL4), len(DL4_bis))):
    t = imag[i]
    if t == 0 and sigma == 0.5:
        continue   # zeta(2s) has a pole at s = 0.5, t = 0
    zeta = mpmath.zeta(complex(2*sigma, 2*t))
    prod = DL4[i] * DL4_bis[i] / (1 - 2**(-complex(2*sigma, 2*t)))
    print("%8.5f %8.5f %8.5f %8.5f %8.5f" % (t, zeta.real, zeta.imag, prod.real, prod.imag))

print("------ correct product")
for t in np.arange(0, 1, 0.25):
    prod = 1.0
    k = 0
    while k < len(p) and p[k] <= max:
        pp = p[k]
        chi = 1 if pp % 4 == 1 else -1   # pp is odd, so chi4(pp) is +1 or -1
        num = 1 - chi * mpmath.power(1/pp, complex(sigma, t))
        prod *= num
        k += 1
    prod = 1/prod
    print("%8.5f %8.5f %8.5f" % (t, prod.real, prod.imag))

print("------ series")
for t in np.arange(0, 1, 0.25):
    total = 0.0   # alternating series: sum of (-1)^k / (2k+1)^s
    sign = 1
    k = 0
    while 2*k + 1 <= 10000:
        total += sign * mpmath.power(1/(2*k+1), complex(sigma, t))
        sign = -sign
        k += 1
    print("%8.5f %8.5f %8.5f" % (t, total.real, total.imag))

About the Author


Vincent Granville is a pioneering data scientist and machine learning expert, founder of MLTechniques.com and co-founder of Data Science Central (acquired by TechTarget in 2020), former VC-funded executive, author, and patent owner. Vincent’s past corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, CNET, and InfoSpace. Vincent is also a former post-doc at Cambridge University and the National Institute of Statistical Sciences (NISS).

Vincent published in Journal of Number Theory, Journal of the Royal Statistical Society (Series B), and IEEE Transactions on Pattern Analysis and Machine Intelligence. He is also the author of “Intuitive Machine Learning and Explainable AI”, available here. He lives in Washington state, and enjoys doing research on stochastic processes, dynamical systems, experimental math, and probabilistic number theory.