*Guest blog post.*

After reading many blog posts, articles, and books, I have collected the ingredients of data science! Moreover, I've organized them into the lists below with the aim of helping anyone who wants to construct their own career road map. Maybe you wonder why I'm not giving the recipe as well: it's because I don't yet have any real-life experience.

*Creative mix of basic ingredients - not unlike data science - to produce delicious curried black spaghetti*

Firstly, most of us have probably encountered someone with a pure data warehousing or machine learning background identifying themselves as a data scientist. This is because those areas overlap heavily with data science (DS). But what about the other areas most closely related to DS:

**Data science related areas**

- Statistics
- Social Science
- Database Querying
- Data Warehousing
- Data Mining
- Machine Learning
- Bioinformatics
- Business/Data Analyst

If we look more closely at any of these areas, we'll be more likely to grasp the basics of DS and to meet the core activities in a data scientist's daily life. For this purpose, let's look at the general tasks of data mining:

**General data mining tasks**

- Classification
- Similarity Matching
- Profiling
- Data Reduction
- Causal Modeling
- Regression
- Clustering
- Co-Occurrence Grouping
- Link Prediction
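To make one of the tasks above concrete, here is a toy sketch of clustering: a minimal k-means on made-up 2-D points (the data and function are hypothetical illustrations, not from any real project; serious work would use a library such as scikit-learn):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                                  + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

# Two obvious groups of points around (0, 0) and (10, 10).
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = sorted(kmeans(data, k=2))
```

After a few iterations, the two centroids settle on the means of the two groups, which is the whole idea behind clustering as a data mining task.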

Up to now, I assume, I have implicitly given you some intuition, and I'm sure you have already seen writings on `What is Data Science?`. However, beyond the definitions given in various sources, let's take a look at its ingredients, the skills. While doing this, don't forget the dynamics of life; that is, what I have listed below is likely to change over time.

**Skills and technical knowledge required**

- Hacking Skill / Analytic Thinking
- Presentation Skill + Story Telling + Communication Skill
- Strategy Expertise
- Mathematics + Statistics + Computer Science Background
- Data Modeling Skills from Machine Learning, Statistics, or Mathematics
- Exploratory Data Analysis: Interactive Data Visualization; Feature Selection & Extraction; EDA on Current Data Types (Traditional, e.g. Numerical, Categorical, Binary; Text, e.g. Email, Tweet, Article; Record, e.g. Timestamped Event Data, JSON/BSON Formatted Files; Geographic Location Data; Sensor Data, e.g. RFID/RF Tag Data; Online/Offline Networks; Images)
- Data Mining Algorithm Skill
- Performance Visualization & Common Test Methods: Performance Metrics & Tools; Trade-Offs (e.g. Bias-Variance, Detection Error); Matrices (e.g. Contingency, Cost-Benefit); Evaluation Metrics (e.g. FP Rate, TP Rate, F-Measure, Sensitivity, Recall, Precision, etc.); Methods for Estimating Generalization Error (Hold-Out, Cross Validation, Random Sampling, Bootstrap, K-Fold / Progressive Cross Validation); Graphs (Lift Curve, Learning Curve, ROC & AUC, Calibration Graph, Fitting Graph, Loss Comparison Graph, Precision-Recall Graph, PN Graph, Cost Curve); Tests (Density Estimate Test); Causality Tests (Simpson's Paradox, A/B Test, Observing Data Distribution, Rubin Causal Model, Cause-Effect Test, Observational Study)
- Skill Set from DS-Related Areas According to Interest, such as Bioinformatics
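The evaluation metrics listed above all fall out of the contingency (confusion) matrix. As a minimal sketch, precision, recall, and F-measure can be computed from made-up labels and predictions in a few lines of plain Python (the labels here are hypothetical):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count the four cells of the contingency matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return tp, fp, fn, tn

def precision_recall_f1(y_true, y_pred):
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # a.k.a. sensitivity / TP rate
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Made-up labels: 4 positives, 4 negatives.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

Here the model finds 2 of the 4 positives (recall 0.5) and 2 of its 3 positive calls are correct (precision 2/3); F-measure is their harmonic mean.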

**Trends**

- Areas (Virtualization; Social Computing, e.g. Social Network Analysis; Big Data; Mobile Computing; Parallel & Distributed Computing)
- Technologies (Hadoop & MapReduce ecosystem, e.g. HDFS, Hive, Pig, ZooKeeper)
- Databases (Relational, e.g. PostgreSQL; Document, e.g. MongoDB; Key-Value, e.g. Riak; Columnar, e.g. HBase; Graph, e.g. Neo4j; Spatial, e.g. PostGIS)
- Computer Programming Skills (R, Octave/MATLAB, Java, JSON/BSON handling, Python, Unix Shell, NoSQL, Social Network Analysis tools, e.g. igraph, NetworkX, Pajek, NetLogo)

Additionally, I believe that how you start and end your activities will highly correlate with the quality of your work. Therefore, before putting your hands on a real data science job, being armed with theoretical background on the key, common points below may rescue many from playing around with undeployable applications. Here, I want to shed light on common pitfalls that arise from not considering the following:

**Data Analysis Phase**: Baseline Performance; Sparseness; Unconsciously Done Stratification; Curse of Dimensionality; Relative Importance of Features; Measurement Errors; Correlated Features; Missing Data Values; Data Leakage; Feature Selection & Extraction (Selection Algorithms, e.g. Forward, Backward, Hybrid; Selection Criteria, e.g. R-Squared, P-Value, AIC, BIC, Entropy; Sample vs. Population: being aware of how to take a sample, avoiding biases in the sample, and knowing how much sample data is enough to learn the features)

**Model Evaluation Phase**: Real-Time Computational Complexity of Scalable Data; Awareness That Preferences Change Over Time; Cost of Model Update; Real-Time Auto-Tuning Capability; Enabling More, Better, and Faster Machine Learning, Statistical, or Mathematical Model Experimentation on Data

**Model Performance Visualization Phase**: Sensitivity and Limitations of Evaluation Metrics; Bias-Variance Tradeoff; Induction-Deduction Tradeoff; Determining Which Part of a Pipelined, Complex System Deserves Most of Your Time, e.g. Ceiling Analysis, Ablative Analysis
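The "baseline performance" pitfall deserves a concrete sketch: before trusting any model, measure the trivial majority-class baseline on a hold-out split. The labels and split below are hypothetical illustrations, not real data:

```python
import random
from collections import Counter

def holdout_split(data, test_frac=0.25, seed=0):
    """Shuffle, then split into train/test sets (hold-out estimation)."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def majority_baseline(train_labels):
    """The simplest possible model: always predict the most frequent class."""
    return Counter(train_labels).most_common(1)[0][0]

# Hypothetical imbalanced labels: 75% zeros, 25% ones.
labels = [0] * 75 + [1] * 25
train, test = holdout_split(labels)
majority = majority_baseline(train)
baseline_accuracy = sum(1 for y in test if y == majority) / len(test)
```

On data this imbalanced, the do-nothing baseline already scores around 75% accuracy, so a model reporting "70% accuracy" would actually be worse than useless; that is exactly why baseline performance heads the pitfall list.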

Even at the end of the post, we still need a set of practices and techniques in order to get gradually closer to a deployable version of our planned product. On Twitter and in data science blogs and associations, there is broad agreement on Agile as the data scientist's methodology. I think one of the reasons is the cyclical behavior embedded in DS. By cyclical behavior, I mean we start from a basic model, run experiments, tune knobs, and then analyze test results. Afterwards, we continue with decisions based on those experiments and test results until we have a deployable product. As a last point, here are some Agile methodologies:

- Scrum
- Unified Process
- Agile Modeling
- Disciplined Agile Delivery
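The cyclical behavior described above, start from a basic model, tune a knob, analyze the result, keep the best decision, can be sketched in a few lines. The one-knob "model" and data below are made up purely for illustration:

```python
def evaluate(threshold, data):
    """Accuracy of a one-knob model: predict class 1 when x >= threshold."""
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

# Hypothetical 1-D data: small x values are class 0, large ones are class 1.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]

best_threshold, best_score = None, -1.0
for knob in [0.0, 0.25, 0.5, 0.75, 1.0]:   # run an experiment per knob setting...
    score = evaluate(knob, data)            # ...analyze the test result...
    if score > best_score:                  # ...and keep the best decision
        best_threshold, best_score = knob, score
```

Each pass through the loop is one turn of the cycle; in real projects the "knob" is a hyperparameter, a feature set, or a model family, and the loop stops when the product is good enough to deploy.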

As a final word, I did this research to clarify my own career road map and shine a light on each dark corner. It has helped me a lot, and I hope it will help you, too. However, I admit this blog post needs more attention than I've spent on it so far. So, please feel free to comment.

© 2020 Data Science Central