In this article we will look at Deep Learning: how it is similar to the human brain, how it works, and its applications.
Today everyone in this world is fascinated by the words Artificial Intelligence, Machine Learning, and Deep Learning; no matter what people are doing, they want to get ahead in the market. In this digital era, where we have vast amounts of data on the internet, we are becoming dependent on machines to analyze it.
Let’s say we want to predict the price of a house. If I went back 50 years, I would seek out a professional whose experience lets them estimate the price given certain parameters like area, number of bedrooms, distance, etc. Now we can train a deep learning model that can easily predict the price of the house.
Sounds great! But what is Deep Learning?
Deep Learning — Definition
Deep Learning can be defined as a powerful branch of Machine Learning in which computer algorithms learn and improve on their own. It works with artificial neural networks, which are loosely modeled on the neurons in the human brain.
How is Deep Learning similar to neurons in the human brain?
If we look at neurons in the human brain, they would look something like this:
Many neurons are interconnected with each other. Let’s pick one neuron and look at it more closely.
A signal comes from the senses (nose, eyes, ears) of our body through the dendrites, travels through a long tail called the axon, and passes to other neurons through the axon terminals. This signal, or information, can travel through multiple neurons before some action takes place.
Similarly, in deep learning we have input nodes and multiple hidden layers of nodes, through which the relevant information passes on to the output layer.
How does Deep Learning work?
As you can see in the above picture of a neural network, we have three layers:
- Input Layer
- Hidden Layers
- Output Layer
The input layer contains multiple independent variables, such as area, distance, and number of bedrooms. Each edge connecting to the hidden layers has a weight associated with it.
In the hidden layers, each neuron (or node) takes the sum of the products of the input signals and their weights, which reflects how important each piece of information is to that neuron. We then apply an activation function to this weighted sum of products.
We repeat the above process for all hidden layers, and the final layer combines these results to calculate the output.
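The weighted-sum-and-activation step for a single hidden neuron can be sketched in a few lines of Python. This is a minimal illustration, not a real trained network: the input values, weights, and bias below are made-up numbers for the house-price features mentioned above.

```python
import math

def sigmoid(x):
    """Squash a value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical house features: area (1000s of sq ft), bedrooms, distance (km)
inputs = [1.5, 3.0, 2.0]

# One hidden neuron: one weight per input edge, plus a bias term
weights = [0.4, 0.2, -0.3]
bias = 0.1

# Sum of products of weights and input signals
weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias

# Apply the activation function to the weighted sum
output = sigmoid(weighted_sum)
```

A full layer simply repeats this computation for every neuron, and the outputs of one layer become the inputs of the next.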
Activation functions are mathematical equations that determine the output of a neural network. A function is attached to each neuron in the network and determines whether it should be activated or not, based on whether that neuron’s input is relevant for the model’s prediction.
There are many activation functions, but the most commonly used are:
- Threshold Function → 0 or 1 (yes / no)
- Sigmoid Function → (0, 1)
- Rectifier (ReLU) → [0, ∞)
- Hyperbolic Tangent (tanh) → (-1, 1)
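A minimal sketch of these four functions, using only the Python standard library:

```python
import math

def threshold(x):
    """Threshold (step) function: outputs 1 ("yes") or 0 ("no")."""
    return 1 if x >= 0 else 0

def sigmoid(x):
    """Sigmoid: squashes any input into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectifier (ReLU): zero for negative inputs, identity otherwise."""
    return max(0.0, x)

def tanh(x):
    """Hyperbolic tangent: squashes any input into (-1, 1)."""
    return math.tanh(x)
```

In practice, the choice matters: ReLU is a common default for hidden layers, while sigmoid and tanh are often used when the output needs to stay within a bounded range.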
Applications of Deep Learning
- Self-Driving Cars.
- News Aggregation and Fraud News Detection.
- Natural Language Processing.
- Virtual Assistants.
- Visual Recognition.
- Fraud Detection.