Neural Network Types

I have been working on my capstone project for the last little bit. It involves using neural networks to solve the problem of segmenting medical images.

Here is a little of what I have learned. I think I am going to start doing a series about this, mainly because machine learning is easier than it seems, and the more people that realize that, the more innovation will happen 😁

* You'll want a good understanding of basic, generic neural networks before reading on.

So let's get on with it. There are a few main types of neural networks, such as Generative Adversarial Networks (GANs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs). Each has its own areas and applications where it works best, but they all generally build on the same principles.

In a GAN, one part of the network, called the generator, generates new data instances, while the other part, the discriminator, evaluates them for authenticity. The discriminator decides whether each instance of data it reviews belongs to the actual training dataset or not. The goal of the discriminator, when shown a real-world instance, is to recognize it as authentic.
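To make that concrete, here is a minimal sketch of the two parts in PyTorch. The layer sizes, the noise dimension, and the flattened 28x28 image shape are all my own assumptions for illustration:

```python
import torch
import torch.nn as nn

# Generator: turns a random noise vector into a flattened "image".
# (The 64-dim noise and 28x28 image size are arbitrary choices for this sketch.)
class Generator(nn.Module):
    def __init__(self, noise_dim=64, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # squash pixel values into [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Discriminator: turns an image into a single probability of being real.
class Discriminator(nn.Module):
    def __init__(self, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # 1 = "looks authentic", 0 = "looks fake"
        )

    def forward(self, x):
        return self.net(x)
```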

The generalized flow of GAN events is as follows: the generator takes in random numbers and returns an image. The generated image is fed into the discriminator alongside a stream of images taken from the actual dataset. The discriminator then takes in both real and fake images and returns probabilities, a number between 0 and 1, with 1 representing a prediction of authenticity and 0 representing fake. Then it enters a double feedback loop: the discriminator is in a feedback loop with the ground truth, and the generator is in a feedback loop with the discriminator.

[Figure: GAN flow chart]
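A toy version of that double feedback loop, continuing the Generator and Discriminator sketch above (the batch of real images is stubbed with random tensors just so the snippet runs end to end):

```python
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(32, 28 * 28) * 2 - 1  # stand-in for a real training batch

for step in range(100):
    # Feedback loop #1: discriminator vs. the ground truth.
    fake_images = G(torch.randn(32, 64)).detach()  # detach: don't update G here
    loss_d = (bce(D(real_images), torch.ones(32, 1)) +
              bce(D(fake_images), torch.zeros(32, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Feedback loop #2: generator vs. the discriminator.
    loss_g = bce(D(G(torch.randn(32, 64))), torch.ones(32, 1))  # G wants "real" verdicts
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```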

CNNs for semantic image segmentation (SIS) are similar to GANs in the sense that they are made up of two main parts. The first part is known as the encoder, which is responsible for extracting the features of the image, and the second part is known as the decoder, which is responsible for building the output (here, a segmentation map) back up from those features.

[Figure: CNN flow chart]

The encoding part of a CNN is a stack of Convolutional (C), Activation (A), and Pooling (P) layers. In the convolutional layer, filters are passed along the image, taking dot products to create feature maps. The result of this gets passed through an activation layer. If the CA pair is repeated too many times, the feature maps start to degrade, so P layers are used. These P layers average out the values in the feature map, which helps the network pick out the key features. Most CNN architectures have several CAP stacks before the result gets put into the decoding part of the CNN.

[Figure: CNN filter]
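Here is what a single CAP stack might look like in PyTorch. The channel counts and image size are arbitrary, and I've used average pooling to match the description above:

```python
import torch
import torch.nn as nn

# One Convolution -> Activation -> Pooling (CAP) stack.
cap_stack = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # C: filters slide over the image taking dot products
    nn.ReLU(),                                   # A: nonlinearity applied to the feature maps
    nn.AvgPool2d(kernel_size=2),                 # P: average out 2x2 windows of the feature map
)

x = torch.rand(1, 3, 64, 64)   # one 64x64 RGB image
features = cap_stack(x)
print(features.shape)          # torch.Size([1, 16, 32, 32]) -- compressed feature maps
```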

The decoding part of the network performs the inverse operations of the encoder, since by the time the feature maps reach the decoder they have been significantly compressed. The layers in the decoding portion go through deconvolution (transposed convolution) and upsampling (for example, max unpooling) to build the output back up to full resolution.
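And a matching decoder stage, continuing the snippet above, using upsampling plus a transposed convolution to invert the CAP stack (sizes are again just illustrative):

```python
# One decoder stage: upsample, then "deconvolve" back toward the input shape.
decoder_stage = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='nearest'),          # undo the 2x pooling
    nn.ConvTranspose2d(16, 3, kernel_size=3, padding=1),  # deconvolution (transposed convolution)
    nn.ReLU(),
)

reconstructed = decoder_stage(features)
print(reconstructed.shape)     # torch.Size([1, 3, 64, 64]) -- back to the input resolution
```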

RNNs are a type of NN where connections between units form a directed graph along a sequence. This allows them to exhibit dynamic temporal behavior for a time sequence. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.
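For example, PyTorch's built-in nn.RNN carries a hidden state across an input sequence automatically (the feature and hidden sizes here are made-up numbers):

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

sequence = torch.rand(1, 10, 8)  # batch of 1, a sequence of 10 steps, 8 features per step
outputs, h_n = rnn(sequence)     # h_n is the final hidden state -- the network's "memory"
print(outputs.shape)             # torch.Size([1, 10, 16]) -- one output per time step
print(h_n.shape)                 # torch.Size([1, 1, 16])
```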

Recurrent networks are distinguished from feedforward networks by a feedback loop connected to their past decisions, ingesting their own outputs moment after moment as input. It is often said that recurrent networks have memory. Adding memory to neural networks has a purpose: there is information in the sequence itself, and recurrent nets use it to perform tasks that feedforward networks can't.

That sequential information is preserved in the recurrent network's hidden state, which manages to span many time steps as it cascades forward to affect the processing of each new example. The network finds correlations between events separated by many moments, and these correlations are called "long-term dependencies", because an event downstream in time depends upon, and is a function of, one or more events that came before. One way to think about RNNs is this: they are a way to share weights over time.
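To see the weight sharing directly, here is the same recurrence written out by hand. The two weight matrices (my own hypothetical names, W_xh and W_hh) are reused at every single time step, and the hidden state h is what cascades forward:

```python
import torch

W_xh = torch.rand(8, 16) * 0.1   # input -> hidden weights, shared across all time steps
W_hh = torch.rand(16, 16) * 0.1  # hidden -> hidden weights, also shared

h = torch.zeros(1, 16)           # initial hidden state: an empty "memory"
sequence = torch.rand(10, 1, 8)  # 10 time steps of an 8-feature input

for x_t in sequence:             # unrolling the network through time
    # The new state is a function of the current input AND everything seen before.
    h = torch.tanh(x_t @ W_xh + h @ W_hh)

print(h.shape)                   # torch.Size([1, 16]) -- the final memory of the sequence
```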
