After all, the term Machine Learning was coined by analogy with the way the human (or animal) brain learns, the idea being that machines could somehow benefit from a similar kind of learning.
But human beings, at least successful ones, also know how to un-learn. In my case, although I was fascinated by mathematics from my earliest years, the school system's training (in the sense of training an algorithm in ML) failed me. It failed not because I did not succeed at school (I ended up at Cambridge University) but because in high school I was fed (the way an ML algorithm is fed a training set) the most boring, least valuable kind of mathematics. Later on, during my academic years, the same was true of the way I was trained to write academic articles: the emphasis was on delivering esoteric content that few could read or use.
Over time, I have learned how to unlearn this training. Likewise, I believe that one big component of machine learning should be unlearning. I can see many instances where this is necessary, especially when:
- using the wrong data
- using the wrong rules
- using the wrong features
- using the wrong model performance metrics
- failing to discover hidden data or features
How can an ML algorithm unlearn highly flawed training, and maybe beat its creator? Sure, these algorithms self-correct and self-learn to some extent, but this capacity is limited and does not seem to provide substantial improvements over time. After all, Facebook still spreads fake news and accepts fake profiles after years of ML training. It seems you still need a human being to un-train or re-train these algorithms, even basic ones, in any significant way, despite claims that data scientists will soon be replaced by robots. (If you want my opinion: for the next 20 years, automated data scientists will just be zombies. At the same time, many people call themselves data scientists while doing mundane work that can easily be automated, which creates confusion about the term 'data scientist'.)
Food for thought: how does an automated ML algorithm learn that it is not using the best data sets or the best features, and that it needs to change, un-learn, and adapt? I believe it is feasible, if you allow ML algorithms to search on their own for the right data, the right features, and so forth, just as Google's self-driving cars have to figure out by themselves how to best survive on a highway.
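To make the idea concrete, here is a minimal sketch of one way an algorithm could "unlearn" wrong features on its own: greedy backward elimination, where a model drops any feature whose removal does not hurt validation accuracy. Everything here is an illustrative assumption, not anything the article prescribes: the synthetic data set (one informative feature, two noise features), the toy nearest-centroid classifier, and the helper names `make_data` and `accuracy` are all invented for the example.

```python
import random

random.seed(0)

# Toy dataset: feature 0 is informative, features 1 and 2 are pure noise.
def make_data(n):
    X, y = [], []
    for _ in range(n):
        label = random.randint(0, 1)
        row = [label + random.gauss(0, 0.3),  # informative
               random.gauss(0, 1),            # noise
               random.gauss(0, 1)]            # noise
        X.append(row)
        y.append(label)
    return X, y

def accuracy(X_train, y_train, X_val, y_val, feats):
    # Nearest-centroid classifier restricted to the chosen features.
    centroids = {}
    for label in (0, 1):
        rows = [x for x, t in zip(X_train, y_train) if t == label]
        centroids[label] = [sum(r[f] for r in rows) / len(rows) for f in feats]
    correct = 0
    for x, t in zip(X_val, y_val):
        dists = {label: sum((x[f] - c) ** 2 for f, c in zip(feats, cent))
                 for label, cent in centroids.items()}
        if min(dists, key=dists.get) == t:
            correct += 1
    return correct / len(y_val)

X_train, y_train = make_data(200)
X_val, y_val = make_data(200)

feats = [0, 1, 2]
improved = True
while improved and len(feats) > 1:
    improved = False
    base = accuracy(X_train, y_train, X_val, y_val, feats)
    for f in list(feats):
        trial = [g for g in feats if g != f]
        if accuracy(X_train, y_train, X_val, y_val, trial) >= base:
            feats = trial  # "unlearn" the useless feature
            improved = True
            break

print("kept features:", feats)
```

On this toy data the search typically discards the noise features and keeps the informative one; the same loop structure could, in principle, search over candidate data sources or metrics rather than features.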
My 2 cents.