**Introduction**

Artificial neural networks are collections of connected nodes designed to identify patterns in data. They underpin deep learning, in which computer systems learn to recognize patterns and perform tasks by analyzing training examples. For example, an object recognition system can be fed thousands of labeled images of houses, cars, traffic signals, animals, and so on, and will learn visual patterns in the images that consistently correlate with the given labels. These networks are inspired by the biological neural networks of the brain: a neural network is modeled loosely on the human brain and can consist of millions of simple, densely interconnected processing nodes, often called perceptrons. An individual node may be connected to several nodes in the layer beneath it, from which it receives data, and to several nodes in the layer above it, to which it sends data. Each node takes multiple inputs, processes them, and transmits its output to the neurons in the next layer. The connections are also called edges. Nodes and edges carry weights, which adjust the strength of the signal at a connection. When the network is active, a node receives a different number (signal) over each of its connections and multiplies it by the associated weight. The node's output (the aggregate signal) is computed by applying a non-linear function to the sum of its weighted inputs. If the output is below a threshold value, the node does not pass data to the next layer; if it exceeds the threshold, the node passes the data along all of its outgoing connections to the next layer.
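The weighted-sum-then-threshold behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular library's API; the sigmoid activation and the function name `node_output` are assumptions chosen for the example.

```python
import math

def node_output(inputs, weights, bias, threshold=0.0):
    """Weighted sum of inputs passed through a non-linear (sigmoid) activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    activation = 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes the sum into (0, 1)
    # The node only passes data onward if the activation exceeds the threshold
    return activation if activation > threshold else 0.0

# Two inputs, each scaled by its connection weight, plus a bias term
signal = node_output([1.0, 0.5], [0.4, -0.2], bias=0.1)
```

Here `signal` is roughly 0.6, since the weighted sum is 0.4 and the sigmoid of 0.4 is about 0.599.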

Initially, when a neural network is trained, its weights and thresholds are set to random values. Training data is fed to the input layer and passes through the succeeding hidden layers, getting multiplied, transformed, and summed in complex ways at each node until it reaches the output layer, where the final predicted output is compared with the expected output and the error is calculated. During training, the thresholds and weights at each node are continuously adjusted until the training data consistently yields the expected outputs. Today, neural network algorithms are emerging as an artificial intelligence technique that can be applied to real-time problems.

**Neural Network Architecture**

A neural network is composed of an input layer (the leftmost layer), whose neurons are called input neurons. The rightmost layer is the output layer and consists of output neurons; in the figure below, the output layer consists of a single neuron. The middle layers are called hidden layers; the figure below contains two hidden layers. Neural networks consisting of multiple such layers are also called multilayer perceptrons, or MLPs.

Now, let’s explore the computation at each node. A node is loosely patterned on a neuron of the human brain. It combines its input data with a set of weights, or coefficients, that either amplify or suppress each input according to the task the algorithm is trying to handle. The sum of the products of inputs and weights is passed through the node’s activation function. Based on how the resulting output signal compares to the threshold value, the network decides whether, and to what extent, that signal should progress further through the network to affect the ultimate outcome. If the signal passes through, that node is said to be activated.
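Different activation functions decide differently how much of a signal progresses onward. As a small illustration (the function choices here are examples, not anything prescribed by the article), the same weighted sum produces different output signals under ReLU and sigmoid:

```python
import math

def relu(z):
    """ReLU: passes positive signals through unchanged, suppresses the rest."""
    return max(0.0, z)

def sigmoid(z):
    """Sigmoid: squashes any signal into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# One weighted sum, two different output signals
z = sum(x * w for x, w in zip([1.0, -2.0], [0.5, 0.25]))  # = 0.0
print(relu(z), sigmoid(z))  # 0.0 0.5
```

With ReLU the node stays silent at zero, while with sigmoid it still emits a mid-strength signal of 0.5.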

A layer is a row of such nodes, or neurons, which act like switches that turn on or off as input passes through the neural network. Each layer’s output is the input to the subsequent layer. The pairing of adjustable weights with input features determines the significance of those features in how the neural network classifies and clusters its input. Below is the framework of artificial neural networks (ANNs) -
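The layered flow — each layer’s output feeding the next until a single output neuron is reached — can be sketched as a chain of layer evaluations. The layer sizes and weight values below are made-up toy numbers, chosen only to mirror the two-hidden-layer architecture described above.

```python
import math

def layer_forward(inputs, weight_matrix, biases):
    """One layer: each node takes a weighted sum of all inputs, then a sigmoid."""
    return [
        1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, row)) + b)))
        for row, b in zip(weight_matrix, biases)
    ]

def mlp_forward(x, layers):
    """Each layer's output is the input to the subsequent layer."""
    for weights, biases in layers:
        x = layer_forward(x, weights, biases)
    return x

# Toy MLP: 3 inputs -> 2 hidden layers of 2 nodes each -> 1 output neuron
layers = [
    ([[0.2, -0.1, 0.4], [0.5, 0.3, -0.2]], [0.1, -0.1]),  # hidden layer 1
    ([[0.3, 0.7], [-0.4, 0.1]], [0.0, 0.2]),              # hidden layer 2
    ([[0.6, -0.5]], [0.05]),                              # output layer (1 node)
]
output = mlp_forward([1.0, 0.5, -0.5], layers)  # a single value in (0, 1)
```

The final list holds one number because the last layer has a single output neuron, matching the architecture in the figure.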

**Types of Neural Networks**

There are multiple types of neural networks, each using different principles to determine its own rules and learn patterns, and each with its own unique strengths.

Above figure depicts various types of neural networks.

**Summary –**

Neural networks represent a very powerful AI technique: they start from a blank slate and work their way toward a precise model. They are effective but complex in their approach to modeling, since they make no assumptions about the functional dependencies between input and output. A key strength of neural networks is that they are designed in a way similar to the biological neurons in the human brain; as a result, they can learn quickly, identify complex patterns accurately in huge amounts of data, and improve in performance with more data and usage. Hence, neural networks are a fundamental framework on which critical artificial intelligence (AI) systems are built.

