Imagine if GPT-4 could tell traders how to play the stock market. For instance, using insider information gathered on the Internet, it could produce winning buy and sell signals, with target prices, for various stocks. Then anyone could use plain English, or even an API, to get the right predictions and trade automatically. Would it work? Of course not, because everyone would use the same arbitrage strategies. It is well known that the end result would be ROI dilution down to zero.
This is just an example. The goal of this article is to debunk a myth: the idea that AI can do everything right, and that soon we will all lose our jobs, even lawyers, doctors, engineers, and mathematicians. While jobs may disappear in large numbers, I see that as a consequence of automation and the move towards lean organizations, a trend that will intensify with or without AI or GPT. So here, I focus only on questioning the ability of AI to do certain tasks better than humans. But first, let's look at the potential impact on the job market.
AI and GPT Impact on Jobs
Will lawyers, teachers, artists, doctors, programmers, recruiters, real estate agents, mathematicians, bloggers, authors and journalists disappear? After all, they all create or gather information, and the change happening now is an information revolution. Indeed, you can now automatically create content, or summarize existing content, to some extent.
In my case, I don't need teachers to learn new material, or doctors to diagnose and fix health problems, or artists to create images for my articles. I also want to automate the production of some of my articles: good candidates are lists of top books to read or top influencers to follow, and I am happy to have GenAI produce the images that I need. Even in mathematics, I don't compute integrals or solve equations anymore; AI does it for me (tools such as Wolfram Alpha). As for coding, I re-use code from other programmers to a large extent, which is very similar to asking GPT to write your code. Indeed, I would fail any coding interview unless I could use GPT!
All this results in huge time and cost savings. Also, it allows me to focus on what AI can’t do for me.
The Hype and the Reality
When I worked on my PhD in the nineties, it was all about image processing and remote sensing. Money was flowing freely. Then the field sank into oblivion, only to resurface recently as computer vision. Computational statistics was a hot topic. Few probably remember it, but it gave rise to modern machine learning, in the process creating the split between statistics and ML. There are good reasons why this happened; it's not just hype. These days, it is as if LLMs are the only thing that matters, and if you don't talk deep learning or transformers, no one will listen to you.
I can see LLMs, 10 years from now, being remembered the way Google search is: a fantastic technology that had its heyday but did not reach its full potential due to monopoly. After all, Google was at the core of the Internet revolution. And you may say that GPT is the new search engine.
I call it fashion. Sure, a lot of progress has been made. I myself no longer code in Perl, and I have embraced generative adversarial networks (GANs). For synthetic data they work poorly, which is why I created NoGAN and better evaluation metrics (see here). But they had their successes in computer vision.
There is no doubt that GPT and OpenAI offer solutions far superior to (say) Google search. The reason is the increasingly poor quality of the content available on the Internet; more on this in the next section. But I am convinced that in 10 years, few will still use the word LLM. It will drift like NLP did, with the focus on the new fashion of the day. In the meantime, LLMs will be a lot more mature and still powering many applications, just as NLP and computational statistics did not exactly die, quite the contrary: LLMs are based on both!
My $1m Challenge
Whenever I really need to do an Internet search, I can't find answers anymore. GPT does a better job than Google, but it still can't answer my questions, no matter how I rephrase them. Granted, I ask research questions, for instance "what is the variance of the range for Gaussian distributions". At this point, I believe that I will have to create my own tool. GPT works well for the general public, but specialized versions serving niche audiences have yet to appear, unless I am missing something. Ten years ago, this was not the case. I actually had an answer to that specific question, but I can't find it anymore. It is now buried, and I blame it on "the race to the bottom": the exponential growth of the Internet, biased towards rudimentary content, while advanced content has been shrinking at a fast pace.
Niche platforms such as Stack Exchange help, but in the end I could not find my answer there anymore. In any case, both Google and GPT look at websites such as Stack Exchange and, like me, are unable to retrieve anything meaningful. This was just one example, but all my questions fit in that category and remain unanswered.
Yet you hear time and again how AI can solve everything, and even prove theorems. The latter is not something new. So I decided to offer $1m to anyone who can get an answer to my new math question: in the first n binary digits of the square root of 2, is it true that, as n increases, the longest run of zeros is asymptotically no longer than log2(n)?
I know nobody can answer that one. Not even my famous namesake Andrew Granville, or the team that would win the Clay Institute's $1m award for solving the Riemann Hypothesis (I am working on it, see here, but not for the award). So here is my proposal: get GPT-4 or any AI tool of your choice to solve it. Send me the answer: a hard mathematical proof or disproof. An independent team of top mathematicians will review your answer (I don't trust AI to evaluate the solution) and decide whether it is worth the $1m award. I have yet to formalize my proposal, but you can see the preliminary version here.
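For readers who want to experiment before attempting a proof, the conjecture is easy to test numerically: floor(sqrt(2) * 2^n) encodes the first binary digits of sqrt(2). A minimal Python sketch (an empirical check only; it proves nothing about the asymptotics):

```python
from math import isqrt, log2

def sqrt2_bits(n):
    """First n+1 binary digits of sqrt(2): the bits of floor(sqrt(2) * 2^n)."""
    return bin(isqrt(2 << (2 * n)))[2:]

def longest_zero_run(bits):
    """Length of the longest run of consecutive zeros in a bit string."""
    return max(len(run) for run in bits.split('1'))

# Compare the longest run of zeros against log2(n) as n grows
for n in (100, 1_000, 10_000, 100_000):
    print(n, longest_zero_run(sqrt2_bits(n)), round(log2(n), 2))
```

The integer square root keeps everything exact (no floating-point digits), so the bit strings are correct to the last digit; only the conjecture's asymptotic claim remains out of reach of any finite computation.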
I work regularly on hard mathematical problems. Does AI help with this? Yes, to some extent. For instance, I use GenAI to create synthetic functions mimicking real ones, in the same way that companies use synthetic data for training set augmentation and enhanced classification or prediction. My routine computations and math derivations are all automated. But for the most part, it works the other way around: exploring these hard problems led to the construction of new AI algorithms. In the context of the Riemann Hypothesis, it led to denoising technology that automatically detects and isolates chaos, to make better predictions. In this case, the chaos was in the distribution of prime numbers, but the technique also applies to real-life problems. Quantum derivatives are another byproduct of my research, with applications in Fintech.
Vincent Granville is a pioneering GenAI scientist and machine learning expert, co-founder of Data Science Central (acquired by a publicly traded company in 2020), Chief AI Scientist at MLTechniques.com, former VC-funded executive, author, and patent owner (one patent related to LLMs). Vincent's past corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, and CNET.
Vincent is also a former post-doc at Cambridge University and the National Institute of Statistical Sciences (NISS). He published in Journal of Number Theory, Journal of the Royal Statistical Society (Series B), and IEEE Transactions on Pattern Analysis and Machine Intelligence. He is the author of multiple books, including "Synthetic Data and Generative AI" (Elsevier, 2024). Vincent lives in Washington state, and enjoys doing research on stochastic processes, dynamical systems, experimental math and probabilistic number theory. He recently launched a GenAI certification program, offering state-of-the-art, enterprise-grade projects to participants.