
A minimum viable learning framework for self-learning AI (machine learning and deep learning)

AI is a complex subject and can be hard to learn.

Often, in the early stages, people make mistakes such as:

a)  They try to learn everything

b)  They do not know in which order to learn 

c)  They go too deep into one subtopic initially

Hence, I created a minimum viable learning framework for self-learning AI (machine learning and deep learning).

Because it is concise and minimal, it does not include topics like GANs, reinforcement learning, etc. It also does not cover Bayesian approaches in detail.

However, this syllabus should get you about 80 percent of the way for a typical data science role.

Statistics 

   Central limit theorem

   Sampling methods 

   Type I vs. Type II errors

   Selection bias 

   Non-Gaussian distributions

   Bias variance tradeoff 

   Confusion matrix 

   Normal distribution 

   Correlation 

   Covariance  

   Point estimates and confidence intervals

   A/B testing

   p-value 

   Resampling

   Methods to combat overfitting and underfitting

   Treatment of outliers 

   Treatment of missing values 

   Confounding variables 

   Entropy and information gain 

   Cross validation 
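To make the last item concrete, here is a minimal sketch (plain Python, no libraries; the function name k_fold_indices is just for illustration) of how k-fold cross validation partitions a dataset so that every sample lands in the validation fold exactly once:

```python
# Illustrative sketch: split n samples into k cross-validation folds.
# Each (train, validation) pair uses a different slice for validation.
def k_fold_indices(n_samples, k):
    """Yield (train_idx, val_idx) index pairs for k-fold cross validation."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        val_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, val_idx
        start += size

folds = list(k_fold_indices(10, 5))
print(len(folds))   # 5 folds
print(folds[0][1])  # first validation fold: [0, 1]
```

In practice you would fit the model on each training split and average the validation scores.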

Basic concepts 

  Difference between a validation set and a test set

  Supervised learning 

  Unsupervised learning 

  Parameters vs. hyperparameters

  Cost function 

Regression 

  Linear regression 

  Assumptions required for linear regression 

  Limitations of linear regression 
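As a quick illustration of single-feature linear regression, here is a hedged sketch using the closed-form ordinary least squares formulas for slope and intercept (fit_linear is a made-up name for this example, not a library function):

```python
# Minimal sketch: ordinary least squares for one feature.
# slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x)
def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Data generated from y = 2x + 1, so the fit should recover those values.
slope, intercept = fit_linear([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```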

Deep learning 

 What is the difference between machine learning and deep learning? 

 Basic working of Neural networks 

 Softmax

 ReLU

 Learning rate 

 Epoch / batch and iteration 

 The convolution operation 

 Layers of a CNN 

 Pooling operation 

 Kernels and Parameter sharing 

 Backpropagation

 Gradient descent 

 Vanishing gradients 

 Activation functions

 LSTM
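To tie a few of the topics above together (the basic working of a network, ReLU, and softmax), here is a toy forward pass through a two-layer network. The weights are made-up numbers, not a trained model, and the helper names are purely illustrative:

```python
import math

def relu(v):
    # ReLU zeroes out negative activations.
    return [max(0.0, x) for x in v]

def softmax(v):
    # Subtract the max before exponentiating for numerical stability.
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

def dense(x, weights, bias):
    # One fully connected layer: output_i = sum_j w_ij * x_j + b_i
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

x = [1.0, 2.0]
hidden = relu(dense(x, [[0.5, -1.0], [1.0, 1.0]], [0.0, -0.5]))
probs = softmax(dense(hidden, [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]))
print(probs)  # two class probabilities that sum to 1
```

Training such a network is then a matter of backpropagation plus gradient descent, covered below.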

Models

  Regression

  Classification

    Logistic regression

    SVM

    Tree-based methods

  Clustering

  PCA / Dimensionality reduction

  MLP

  CNN

  Autoencoders

Regularization 

  Lasso 

  Ridge 

  Regularization in deep learning (e.g., dropout)

Ensemble methods 

  Boosting 

  Bagging 

Optimization techniques 

  Matrix / closed-form optimization techniques (e.g., the normal equation), in contrast to iterative methods

  Gradient descent, including specific optimizers like Adam

  Backpropagation
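As a minimal sketch of the iterative side, here is plain gradient descent minimizing f(w) = (w - 3)^2, whose gradient is 2(w - 3). The learning rate and step count are illustrative choices, not universal defaults:

```python
# Illustrative sketch: vanilla gradient descent on a 1-D quadratic.
def gradient_descent(grad, w0, learning_rate=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= learning_rate * grad(w)  # step opposite the gradient
    return w

# f(w) = (w - 3)^2 has gradient 2 * (w - 3) and minimum at w = 3.
w = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w, 4))  # converges to 3.0
```

Optimizers like Adam refine this basic update with per-parameter adaptive step sizes and momentum.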

Statistical inference 

   Models 

     Parametric models 

     Non-Parametric models 

  Paradigms 

     Frequentist 

     Bayesian 

  Statistical proposition/outcome 

     A point estimate 

     An interval estimate 

     A credible interval 

     Rejection of a hypothesis

     Clustering or classification of data points into groups. 

  Parameter Estimation techniques 

    Ordinary least squares estimation

    Maximum likelihood estimators 

Hyperparameter tuning techniques 

   Grid search

   Random search
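To illustrate grid search, here is a small sketch that exhaustively scores every combination of two hyperparameters. The scoring function is a made-up stand-in; in practice it would be a cross-validated model score:

```python
import itertools

grid = {"learning_rate": [0.01, 0.1, 1.0], "depth": [2, 4]}

def score(params):
    # Hypothetical objective that happens to prefer lr=0.1 and depth=4.
    return -abs(params["learning_rate"] - 0.1) - abs(params["depth"] - 4)

# Try every combination in the grid and keep the best-scoring one.
best = max(
    (dict(zip(grid, values)) for values in itertools.product(*grid.values())),
    key=score,
)
print(best)  # {'learning_rate': 0.1, 'depth': 4}
```

Random search replaces the exhaustive product with random draws from each range, which often finds good settings with far fewer evaluations when only a few hyperparameters matter.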

Feature Engineering 

  Feature extraction (e.g., PCA)

  Feature transformation (e.g., binning, log transforms)

  Feature selection (filter methods, wrapper methods, etc.)

Model evaluation 

  Regression metrics 

     (R)MSE 

     MAE 

      R² 

  Classification metrics 

    Accuracy 

    Recall 

    Precision 

    F1 score 

    Confusion matrix
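All of these classification metrics fall out of the four confusion-matrix counts (true/false positives and negatives). Here is a sketch with made-up counts:

```python
# Made-up confusion-matrix counts for a binary classifier.
tp, fp, fn, tn = 40, 10, 5, 45

accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction correct overall
precision = tp / (tp + fp)                   # of predicted positives, how many are real
recall = tp / (tp + fn)                      # of real positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(accuracy)       # 0.85
print(precision)      # 0.8
print(round(f1, 3))   # 0.842
```

Notice that precision and recall pull in different directions: lowering the decision threshold raises recall but typically costs precision, which is why the F1 score is useful as a single summary.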

Hope you find it useful!

 

Image source: Peggy Marco, Pixabay
