This is a popular question recently posted on Quora, with my answer viewed more than 8,000 times so far. I am re-posting it here. This post is much more detailed than my initial answer.
-------
My answer may appear sarcastic; after all, I am a math PhD and have published in journals such as the Journal of Number Theory. I left academia long ago, yet I am still doing what I consider ground-breaking research in math, in my opinion superior to what I did during my postdoc years. However, I believe many mathematicians would view my recent articles as "bad math". In short, my work is written in simple English, accessible to a large audience, and free of complex proofs, arcane theory, or esoteric formulas. There is no way it could ever be published in a scientific journal (I haven't tried, so I could be wrong, though formatting and editing it for such publications would require far more time than I have). I will go as far as to say that my research is essentially "anti-math". You can see my most recent article here: New Decimal Systems - Great Sandbox for Data Scientists and Mathema....
Here is another example of "bad math" from me: the formulas are correct, and possibly discovered by me for the first time (I doubt it, but you never know), but the simple way I "prove" them would be considered bad math.
To read this article, featuring these formulas, with a proof that a first-year college student could understand, click here. I read elsewhere that math research is now so specialized that soon nobody will be able to add anything new. Some have said that proving theorems will soon be performed by AI rather than humans. Indeed, in my proof of the formulas above, I actually used an API to automatically compute some integrals.
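The article does not say which API was used or which integrals were computed, so as a minimal illustrative sketch (my own, not from the original post), here is how a one-dimensional integral can be computed automatically in a few lines of pure Python with the composite trapezoid rule:

```python
import math

def integrate(f, a, b, n=10_000):
    # Composite trapezoid rule with n equal subintervals.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Example: the integral of sin(x) over [0, pi] is exactly 2.
approx = integrate(math.sin, 0, math.pi)
```

In practice one would more likely call a library routine (e.g., a quadrature function from a scientific computing package), but the idea of delegating the computation to code rather than doing it by hand is the same.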
Yet, in general, I disagree, and I believe that the "high priests" of the mathematical community are making it very hard for anyone else to be accepted and to contribute. By contrast, if you read my articles, they are easily accessible, and they even discuss fundamental concepts offering plenty of low-hanging fruit for research (including business applications and state-of-the-art number theory) that "high priest" mathematicians are no longer interested in.
I hope to make the field of mathematics appealing to people who were turned off during their high school or college classes by the abundance of jargon, the lack of interesting applications, and the top-down approach (highly technical material presented in hard-to-understand language before any potential applications are described, which is the opposite of my approach). At the same time, I never had to lower the quality or value of my research to make it accessible, and fun to read, for a large audience of non-experts. The only change is my writing style.
For related articles from the same author, click here or visit www.VincentGranville.com. Follow me on LinkedIn.
Comments
I believe that there is also a lack of vision in academic research, possibly due to the agencies that decide how to allocate funding (grants). In my case, my research is cross-disciplinary: it sits at the intersection of dynamical systems, number theory, statistics, probability theory, and computer science (after all, that's what data science is about!). If you tried to get a grant for something like this, despite its potential applications, you would probably get rejection letters that read like "this is not number theory", "this is not computer science", or "this is not statistics", because it is all of these at the same time.
Also, when I write about the "randomness of digits in number representation systems", experts in probability theory roll their eyes, saying "digits are produced by deterministic algorithms; there is no randomness, and how can you put a statistical distribution on static, deterministic numbers, especially if you consider an infinite number of digits?" So essentially, they have no interest in this; they consider it heresy. This is great for everyone outside academia, because it is low-hanging fruit that academic mathematicians are not interested in.
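To make the idea concrete, here is a sketch of my own (not from the original post) of the kind of empirical question involved: do the decimal digits of a deterministic irrational number, such as the square root of 2, look statistically uniform? Using exact integer arithmetic, the first N digits can be extracted and their frequencies tallied:

```python
from collections import Counter
from math import isqrt

N = 10_000  # number of leading decimal digits to examine
# floor(sqrt(2) * 10**N) = isqrt(2 * 10**(2*N)), computed exactly with integers;
# its decimal string gives the leading digits of sqrt(2).
digits = str(isqrt(2 * 10 ** (2 * N)))[:N]

counts = Counter(digits)
freqs = {d: counts[d] / N for d in "0123456789"}
# If the digits behaved like i.i.d. uniform draws, each of the ten
# frequencies would be close to 0.1 (up to sampling fluctuation ~ 0.003).
```

The digits are fully deterministic, yet their empirical distribution can still be studied with statistical tools; that tension is exactly what the objection above is about.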
I totally agree with you.
Most universities have unfortunately lost all contact with the (machine learning) field. There are plenty of interesting low-hanging subjects that would merit close inspection, but these so-called "high priests" are lost in their esoteric research.
I read once that ML is the only field of science where:
1) ...most of the real big advances were made by private companies.
2) ...most of the scientific publications originate from private companies (to be more precise, only 30% of the scientific publications are from universities, whereas in other fields of science, university publications usually occupy more than 80% of the space). It is really surprising to me that private companies decide to publish and explain in full detail what makes their algorithms (or software solutions) so much better than those of other companies. It seems to me that any company disclosing such important information would directly lose a big part of its commercial advantage and intellectual property rights. Yet despite these obvious obstacles to any "open" publication, papers published by private companies still cover 70% of the scientific publications. How is that possible? Wow!
For me, the only possible explanation for this strange observation is that university professors know very, very little about the machine learning field. Indeed, if you look at the most recent Kaggle or KDD competitions, you won't find any university professors there. How is this possible? Are all universities full of "crooks" behaving "as if" they know something? (This is indeed what I have observed in many of the places I visit.)
I am not saying that there are not a few university professors really contributing to the field (e.g., Trevor Hastie, Robert Tibshirani, Jerome Friedman, and Ross Quinlan made major, great contributions), but in recent years I have really struggled to find anything worth reading. It is "as if" everybody stopped thinking and stopped inventing new and better algorithms (and just builds "ensembles" instead). I don't like that at all! And don't talk to me about "deep learning"! Because this is just:
1) nothing new: an old algorithm (artificial neural networks) that got "hyped" again
2) a terribly bad algorithm for the applications in which I am involved (churn and cross-selling)
Where are the bright minds? Are they lost?
I'll stop there otherwise everybody will think that I am just an old guy having a rant!! ;-)
See you!
Frank
© 2018 Data Science Central ®