In a recent article (February 2019) published in Forbes (see here), it was argued that there will be no data science job titles by 2029. The author wrote that automation is coming for many tasks data scientists perform, including machine learning.
I disagree. If you haven't automated most of your tasks yet, you are not doing data science and you are not a data scientist: you are an overpaid data cruncher. For the uninitiated, data science may still mean performing long, mundane tasks such as data cleaning, data processing, model or feature selection, and testing, tasks that eat 80% of your time. But real data science is about automating these chores and designing robust, fast, black-box algorithms to take care of them, among many other things.
Data science was already hot 30 years ago, when I started my PhD. Back then it was called computational statistics, and it even included AI systems to identify patterns in images. You can see the data science projects I was working on at least 25 years ago, long before the term data science was coined, here.
So no, data science is neither new nor about to die. The data, techniques, and computing power are evolving, though, and so are the level of automation, the domains of application (for instance urban planning, agriculture, astronomy, real estate, the dating industry, and more recently health care), and the added value when done properly by a qualified team. I still do data science today, but I am the one making old business models obsolete (in digital publishing, as an entrepreneur), rather than a victim of companies unwilling to hire ten years from now. True, they won't hire me, but that is because they will have died, partly because of me, not the other way around. In short, it is not that the jobs will disappear; rather, the companies lacking the level of automation that sound data science offers will be the ones to die.
History also repeats itself. For instance, an obscure theoretical paper that I published in 1994 (see here) has suddenly gained considerable traction in AI circles over the last 12 months, judging by the number of recent citations. The title is Simulated Annealing: A Proof of Convergence. At the time, I was using the simulated annealing algorithm in the context of image deblurring and signal processing, to design better automated noise-filtering systems. With IoT sensor data and big, messy data, signal processing and pattern recognition (which I consider to be data science) are becoming popular again.
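For readers unfamiliar with the algorithm mentioned above, here is a minimal simulated annealing sketch in Python. The objective function, cooling schedule, and parameter values are illustrative assumptions chosen for this toy example, not the setup from the 1994 paper or from any image-deblurring application.

```python
import math
import random

def simulated_annealing(f, x0, temp=10.0, cooling=0.95,
                        steps=1000, step_size=0.5, seed=42):
    """Minimize a 1-D function f via simulated annealing.

    A worse candidate is accepted with probability exp(-delta / temp),
    which lets the search escape local minima; the temperature decays
    geometrically, so the random walk settles down over time.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    for _ in range(steps):
        # Propose a random move around the current point.
        candidate = x + rng.uniform(-step_size, step_size)
        fc = f(candidate)
        delta = fc - fx
        # Always accept improvements; sometimes accept worse moves.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x, fx = candidate, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
        temp *= cooling  # cool down
    return best_x, best_fx

# A bumpy test function: many local minima, global minimum at x = 0.
bumpy = lambda x: x * x + 3 * abs(math.sin(5 * x))
x_min, f_min = simulated_annealing(bumpy, x0=4.0)
```

Unlike plain gradient descent, the occasional acceptance of worse candidates is what allows the algorithm to climb out of the local minima of the bumpy objective; the convergence result proved in the paper concerns conditions on this cooling schedule.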
- Why You Should be a Data Science Generalist – and How to Become One
- Becoming a Billionaire Data Scientist vs Struggling to Get a $100k Job – What is the difference?
- Is a PhD helpful for a data science career?
- If data science is in demand, why is it so hard to get a job?
- Why do people with no experience want to become data scientists?
- Why is Becoming a Data Scientist so Difficult?
- Full Stack Data Scientist: The Elusive Unicorn and Data Hacker
- Statistical Significance and p-Values Take Another Blow
- Are data science or stats curricula in US too specialized?
- How do you identify an actual data scientist?
- Is it still possible today to become a self-taught data scientist?
To avoid missing this type of content in the future, subscribe to our newsletter. For related articles from the same author, click here or visit www.VincentGranville.com. Follow me on LinkedIn, or visit my old web page here.