I have been recently asked (by a theoretical computer scientist): “What is the role of Statistics in the development of Theoretical Data Science?”
I think this is a fundamental question. If you have any opinion on this, please feel free to share it with us. Here's my 2 cents:
Theory of [Efficient] Computing: A branch of Theoretical Computer Science that deals with how quickly a given problem can be solved (computed) by an algorithm. The critical task is to analyze algorithms carefully, based on performance characteristics such as running time and memory usage, to make them computationally efficient.
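As a toy illustration of this "analysis" side (my own sketch, not from the original post), consider two algorithms for the same problem, searching a sorted list, instrumented to count comparisons. The analysis task is precisely to quantify this kind of gap:

```python
def linear_search_ops(arr, target):
    """Count comparisons a linear scan makes before finding target: O(n)."""
    for i, x in enumerate(arr):
        if x == target:
            return i + 1  # comparisons performed so far
    return len(arr)

def binary_search_ops(arr, target):
    """Count comparisons a binary search makes on a sorted list: O(log n)."""
    lo, hi, ops = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        ops += 1
        if arr[mid] == target:
            return ops
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return ops

arr = list(range(1_000_000))
print(linear_search_ops(arr, 999_999))  # 1000000 comparisons
print(binary_search_ops(arr, 999_999))  # 20 comparisons
```

Both return the same answer; the analysis of their performance characteristics is what distinguishes them.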
Theory of Unified Algorithms: An emerging branch of Theoretical Statistics that deals with how efficiently one can represent a large class of diverse algorithms using a single unified semantics. The critical task is to put together different “mini-algorithms” into a coherent master algorithm.
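One hypothetical sketch of the "single unified semantics" idea (my illustration, not a construction from the post): a single gradient-descent driver whose loss function is a plug-in parameter, so that different "mini-algorithms" — here, estimating a location parameter under squared loss (the mean) versus absolute loss (the median) — become instances of one master routine:

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Master routine: any mini-algorithm that supplies a gradient reuses this loop."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Mini-algorithm 1: squared loss — its minimizer is the sample mean.
def squared_loss_grad(data):
    return lambda m: sum(2 * (m - y) for y in data) / len(data)

# Mini-algorithm 2: absolute loss — its minimizer is the sample median.
def absolute_loss_grad(data):
    return lambda m: sum((m > y) - (m < y) for y in data) / len(data)

data = [1.0, 2.0, 3.0, 10.0, 20.0]
print(gradient_descent(squared_loss_grad(data), x0=0.0))   # ≈ 7.2 (mean)
print(gradient_descent(absolute_loss_grad(data), x0=0.0))  # ≈ 3.0 (median)
```

The synthesis step is choosing the representation (here, "estimator = loss + generic optimizer") under which diverse procedures share one semantics.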
For the overall development of Data Science, we need both ANALYSIS and SYNTHESIS. However, it is also important to bear in mind the distinction between the two.
Find original post here
This may not fully answer your question, but I have written quite a bit recently about the role of data science in theoretical statistics, mostly in the context of dynamical systems, stochastic processes, and probabilistic number theory. You can find my articles here. Applications include financial markets, cryptography, and random number generation.
Some of these articles deal with high-precision computing, especially how to identify when an iterative algorithm starts generating wrong output due to cumulative round-off errors. This has a statistical component; see here.
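The round-off problem described here can be illustrated with a toy experiment (my own sketch, not the method from the linked articles): iterate the chaotic logistic map x → 4x(1 − x) in ordinary double precision and in 50-digit arithmetic, and flag the first step at which the cheap trajectory has drifted noticeably from the high-precision reference:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50-digit reference trajectory

def step_float(x):       # ordinary 64-bit float arithmetic
    return 4.0 * x * (1.0 - x)

def step_decimal(x):     # high-precision reference arithmetic
    return Decimal(4) * x * (Decimal(1) - x)

xf, xd = 0.1, Decimal("0.1")
diverge_step = None
for n in range(1, 201):
    xf, xd = step_float(xf), step_decimal(xd)
    if abs(Decimal(xf) - xd) > Decimal("0.01"):
        diverge_step = n  # first step with error above the threshold
        break

print(f"float trajectory off by > 0.01 after {diverge_step} iterations")
```

Because this map roughly doubles small errors at every step, the ~1e-16 float representation error grows to visible size after a few dozen iterations, at which point the double-precision output is effectively wrong.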
Vincent, Thanks for bringing this to my attention.
Best
Deep
© 2020 Data Science Central ®