All About Recurrent Neural Networks (RNN)

Mohamed Bakrey Mahmoud
5 min read · Jul 30, 2022


Introduction

In this article we look at the recurrent neural network (RNN), a type of neural network in which the output of the previous step is fed as an input to the current step. In a traditional neural network, all inputs and outputs are treated as independent of each other, but for tasks such as predicting the next word of a sentence, the previous words are needed, so the network must remember them. The RNN solves this problem with its hidden state, which retains information about the sequence seen so far; in effect, the network carries a memory of what has already been computed. It also uses the same parameters at every step, because it performs the same task on each input (or hidden state) to produce the output. This weight sharing keeps the number of parameters small compared with other neural networks.

What is an RNN?

An RNN is a type of neural network that can process recursive and sequential data, recognize patterns in it, and predict the output. It is called a recurrent neural network because it applies the same operation repeatedly, feeding each step's output back in as part of the next step's input. In this sense the network has an internal memory that lets it retain the information it has seen, which helps it build up context. So if you have sequential data such as a time series, an RNN is well suited to processing it. This cannot be done by a CNN or a feed-forward neural network, because they cannot capture the relationship between previous and subsequent inputs. Today some companies use RNNs in popular products, such as Google's voice search and Apple's Siri, to process user input and predict the output.

How does an RNN work?

The logic behind an RNN is to save the output of a given layer and feed it back into the input in order to predict that layer's output at the next step. Below is a simple example of how a feed-forward neural network can be converted into a recurrent neural network (RNN).

In the figure on the left, the symbols represent the following:

  • X is the input layer.
  • h is the hidden layer; it holds information from previous steps and feeds it back into itself.
  • y is the output layer.
  • A, B, and C are the parameters (weights) that are learned to improve the model's output.

Here is an example of a fully connected Recurrent Neural Network.

As the image above shows, a recurrent neural network shares the same weight parameters across every step of the sequence, whereas a feed-forward network has different weights at each node of the network.
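To make the recurrence concrete, here is a minimal NumPy sketch of the loop described above. It is not the article's exact figure: the mapping of A (input-to-hidden), B (hidden-to-hidden), and C (hidden-to-output), the layer sizes, and the random weights are illustrative assumptions. The two key points it shows are that the hidden state h is fed back into the next step, and that the same A, B, and C are reused at every time step.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, output_size = 3, 4, 2

# The same A, B, C are reused at every time step (weight sharing).
A = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input  -> hidden
B = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the "memory" path)
C = rng.normal(scale=0.1, size=(output_size, hidden_size))  # hidden -> output

def rnn_step(x_t, h_prev):
    """One step of a vanilla RNN: the previous hidden state is fed back in."""
    h_t = np.tanh(A @ x_t + B @ h_prev)
    y_t = C @ h_t
    return h_t, y_t

# Unroll over a short input sequence; h carries information forward.
xs = [rng.normal(size=input_size) for _ in range(5)]
h = np.zeros(hidden_size)
for x_t in xs:
    h, y_t = rnn_step(x_t, h)
    print(y_t)
```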

Why do we use recurrent neural networks?

RNNs were built to address some limitations of the feed-forward neural network:

  • It cannot process sequential data.
  • It considers only the current input.
  • It cannot remember previous inputs.

RNNs solve these issues: an RNN can process sequential data, accepting the current input together with the inputs it has already received, and it can retain those previous inputs thanks to its internal memory. A small demonstration of this memory follows below.
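As a rough check of the "internal memory" claim, the following toy example (made-up sizes and untrained random weights, purely for illustration) feeds the same input vector to the network at two different time steps. The outputs differ, because each output also depends on everything seen before, whereas a feed-forward network would return the same output both times.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(scale=0.5, size=(4, 3))  # input  -> hidden
B = rng.normal(scale=0.5, size=(4, 4))  # hidden -> hidden
C = rng.normal(scale=0.5, size=(2, 4))  # hidden -> output

def rnn_step(x_t, h_prev):
    h_t = np.tanh(A @ x_t + B @ h_prev)
    return h_t, C @ h_t

x_repeated = np.array([1.0, -1.0, 0.5])   # the same input, fed twice
h = np.zeros(4)

h, y_first = rnn_step(x_repeated, h)       # first occurrence
h, _ = rnn_step(rng.normal(size=3), h)     # something else in between
h, y_second = rnn_step(x_repeated, h)      # second occurrence

# Almost surely False: the hidden state "remembers" the earlier inputs,
# so the same input does not produce the same output.
print(np.allclose(y_first, y_second))
```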

Types of Recurrent Neural Networks

There are four types of Recurrent Neural Networks:

  1. One to One
  2. One to Many
  3. Many to One
  4. Many to Many

One-to-One RNN

This type is known as a vanilla neural network. It is used for standard machine learning problems that have one input and one output.

One-to-Many RNN

This type has one input and several outputs, as in image captioning, where a single image produces a sequence of words.

Many-to-One RNN

This type takes a sequence of inputs and produces a single output, as in sentiment analysis, where a whole sentence is classified as positive or negative.

Many-to-Many RNN

This type has many inputs and many outputs, as in machine translation. A minimal code sketch of the many-to-one and many-to-many cases follows below.
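The sketch below (assumed shapes and untrained random weights, not a trained model) shows how the same recurrence gives both the many-to-many and the many-to-one cases: the network produces an output at every step, and the only difference is whether you keep all of those outputs or only the final one.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(scale=0.1, size=(4, 3))  # input  -> hidden
B = rng.normal(scale=0.1, size=(4, 4))  # hidden -> hidden
C = rng.normal(scale=0.1, size=(2, 4))  # hidden -> output

def run_rnn(xs):
    """Run the recurrence over a whole sequence and collect every output."""
    h = np.zeros(4)
    outputs = []
    for x_t in xs:
        h = np.tanh(A @ x_t + B @ h)
        outputs.append(C @ h)
    return outputs

sequence = [rng.normal(size=3) for _ in range(6)]
ys = run_rnn(sequence)

many_to_many = ys      # keep one output per input step (e.g. translation, tagging)
many_to_one = ys[-1]   # keep only the last output (e.g. sentiment of a sentence)
print(len(many_to_many), many_to_one.shape)
```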

Applications built on this kind of network include:

  • Text Generation
  • Machine Translation
  • Visual Search, Face detection, OCR
  • Speech recognition
  • Semantic Search
  • Sentiment Analysis
  • Anomaly Detection
  • Stock Price Forecasting

Advantages of RNNs

  • The principal advantage of an RNN over an ordinary ANN is that it can model a sequence of data (e.g. a time series) in which each sample can be assumed to depend on the previous ones.
  • Recurrent neural networks can even be combined with convolutional layers to extend the effective pixel neighborhood.

Disadvantages of RNNs

  • Exploding and vanishing gradient problems (illustrated with a toy example below).
  • Training an RNN is a difficult task.
  • It cannot process very long sequences when tanh or ReLU is used as the activation function.
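To see why gradients vanish or explode, note that backpropagation through time multiplies the gradient by the recurrent weight matrix once per step. The toy example below (a purely linear recurrence with a scaled orthogonal weight matrix, chosen only to make the effect easy to see, not a full backpropagation implementation) shows the gradient norm shrinking geometrically when the scale is below 1 and growing when it is above 1; the tanh derivative, which is at most 1, can only make the vanishing worse.

```python
import numpy as np

rng = np.random.default_rng(3)
n, steps = 8, 50
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # a random orthogonal matrix

for scale in (0.9, 1.1):                       # recurrent weights slightly below / above 1
    W_hh = scale * Q
    grad = np.ones(n)                          # gradient arriving at the final time step
    for _ in range(steps):
        grad = W_hh.T @ grad                   # one step back through the recurrence
    print(f"scale {scale}: gradient norm after {steps} steps = {np.linalg.norm(grad):.2e}")
```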

Conclusion

In this article we have explained one of the networks that plays a significant role in deep learning and artificial intelligence, how it works, and the many applications it is involved in, as mentioned above. Recurrent neural networks are a cornerstone of modern artificial intelligence: they provide a stable foundation for AI software that is more efficient, more accessible and, most importantly, more convenient to use. The results produced by recurrent neural networks also show the real value of data in this day and age: they show how much can be extracted from records and what that information can generate.

You can see an implementation of it in code here in this work.

Mohamed B Mahmoud. Data Scientist.
