They say that the best ideas sometimes come to you while you are in the shower, and this idea of how to explain two important Neural Network concepts – Backpropagation and Stochastic Gradient Descent – actually did come to me as I was trying to set the perfect water temperature for my morning shower.
As I was struggling to adjust the two shower handles – one handle that controlled scalding hot and the other handle that controlled flash freezing – it occurred to me that I was a simple Neural Network (in spite of the “Snow Miser/Heat Miser” song running through my head). I was using Backpropagation to feed back the error between my desired water temperature and the actual water temperature to my two-handled Neural Network, and Stochastic Gradient Descent to determine how much to adjust the parameters of those two handles.
While I don’t expect that most people will ever write their own Neural Network program, the more that you understand how these advanced technologies work, the better prepared you will be to determine where and how to leverage AI, Machine Learning and Deep Learning technologies to uncover new sources of economic value with respect to customer, product and operational insights.
Let’s review the basic concepts:
Figure 1: “Neural Networks: Is Meta-learning the New Black?”
The goal of the faucet Neural Network is to find my optimal water temperature by tuning the faucet (model) parameters (weights and biases). I’ll use my hands to measure the error between actual and desired results and then backpropagate that error (its size and direction) to the faucet neural network to tweak the hot and cold handles’ weights and biases. I will use the Neural Network concept of Backpropagation to feed the error between expected and actual model results back to the model’s parameters, and Stochastic Gradient Descent to determine how much to tweak those model parameters (faucets) in order to eliminate the error (see Figure 2).
Figure 2: Bathroom Faucet Neural Network and Tweaking Hot and Cold Faucet Weights and Biases
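For readers who like to see things in code, here is a minimal Python sketch of the faucet network. Everything in it is an assumption invented for illustration: a pretend linear mixing model in which the two handle positions play the role of the trainable weights, and the HOT_EFFECT, COLD_EFFECT and BASE_TEMP constants are hypothetical numbers.

```python
# A toy version of the faucet "neural network." All names and constants are
# hypothetical, chosen only to make the analogy runnable: we pretend the
# water temperature mixes linearly with the two handle positions, and the
# handle positions play the role of the weights being tuned.

HOT_EFFECT = 40.0    # assumed degrees C added per unit the hot handle is open
COLD_EFFECT = -25.0  # assumed degrees C removed per unit the cold handle is open
BASE_TEMP = 15.0     # assumed water temperature with both handles closed

def water_temp(hot, cold):
    """The 'model': predicted water temperature for given handle positions."""
    return BASE_TEMP + HOT_EFFECT * hot + COLD_EFFECT * cold

def loss_and_gradients(hot, cold, target):
    """Squared-error loss and its gradient with respect to each handle --
    the 'backpropagation' step for this one-layer toy model."""
    error = water_temp(hot, cold) - target   # signed: size and direction
    loss = 0.5 * error ** 2
    grad_hot = error * HOT_EFFECT            # d(loss)/d(hot handle)
    grad_cold = error * COLD_EFFECT          # d(loss)/d(cold handle)
    return loss, grad_hot, grad_cold
```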
My hand in Figure 3 measures the error – its size and direction – between the expected and actual model results: blazing hot (Heat Miser), mildly hot, slightly hot, slightly cold, mildly cold, chillingly cold (Snow Miser).
Figure 3: Measuring / Determining the Error Between Actual versus Optimal Results
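In code, the “hand” is just a signed error measurement: the sign gives the direction (too hot versus too cold) and the magnitude gives the size. Here is a small sketch, with the temperature bands made up purely for illustration:

```python
# The "hand" that measures error. The band thresholds are made-up numbers.

def describe_error(actual, desired):
    """Return the signed error and a rough hand-reading of it."""
    error = actual - desired  # positive = too hot, negative = too cold
    size = abs(error)
    if size < 1:
        label = "just right"
    elif size < 5:
        label = "slightly hot" if error > 0 else "slightly cold"
    elif size < 15:
        label = "mildly hot" if error > 0 else "mildly cold"
    else:
        label = ("blazing hot (Heat Miser)" if error > 0
                 else "chillingly cold (Snow Miser)")
    return error, label

print(describe_error(actual=52.0, desired=38.0))  # (14.0, 'mildly hot')
```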
I continue to measure the size and direction of the temperature error, backpropagate it to the faucet (model), and use Stochastic Gradient Descent to determine how much to tweak the model parameters (faucet settings) until I get the perfect result (see Figure 4).
Figure 4: Backpropagating the Error Back to the Neural Network Model in Order to Tweak the Model Weights
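Putting the pieces together, the tweak-and-re-measure loop is just gradient descent. The sketch below reuses the hypothetical water_temp and loss_and_gradients functions from the first sketch; the learning rate and starting handle positions are arbitrary assumptions. (Strictly speaking this is plain gradient descent – with a single target temperature there is no minibatch to sample, which is where the “stochastic” in Stochastic Gradient Descent would come in.)

```python
# Repeat: measure the error, backpropagate it, nudge each handle a little
# against its gradient, until the temperature is close enough to perfect.

def tune_faucet(target, lr=0.0005, steps=200):
    hot, cold = 0.5, 0.5  # arbitrary starting handle positions
    for _ in range(steps):
        loss, grad_hot, grad_cold = loss_and_gradients(hot, cold, target)
        if loss < 1e-6:   # close enough to the perfect temperature
            break
        hot -= lr * grad_hot     # turn each handle opposite its gradient
        cold -= lr * grad_cold
    return hot, cold, water_temp(hot, cold)

hot, cold, temp = tune_faucet(target=38.0)
print(f"hot={hot:.2f}, cold={cold:.2f}, temp={temp:.1f}C")  # settles near 38C
```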
While the actual mechanics of how a neural network works are much more complicated (lots of math and calculus), the basic concepts are really not that hard to understand. And the more that you can understand how these advanced technologies work, the better prepared you will be to determine where and how to leverage AI, Machine Learning and Deep Learning technologies to uncover new sources of economic value with respect to customer, product and operational insights. And maybe, just maybe, as you are uncovering those new sources of economic value, you can join me in a rousing chorus of “I’m Mr. Heat Miser…”
Source: “30 Famous Christmas Songs Lyrics”
For those interested, here is my video explaining the water faucet neural network (and yes, I know that I need to clean the grout).
Comments
Thanks, John-Brian!
Love this analogy, Bill... "My Faucet Lesson of Neural Networks in Greater Detail"
Script Kiddie: a person who uses existing computer scripts or code to hack into computers, lacking the expertise to write their own.
I learned a new term today. Thanks!
But y'know, I suppose it was a valuable lesson in how to be a script kiddie. I've been a script kiddie on a bunch of things since then, like computer audio programming, graphics and now deep learning.
LMAO!!!
We had to write a program to solve this problem in freshman programming, circa 1979 at UC Santa Cruz.
Of course, we didn't know Runge-Kutta from a hole in the ground and just stole the formula.