5 Projects to Improve Your Resume and Learn to Program

One of the best ways to get started with programming, or to help in finding a job, is doing side projects. They help you understand what goes into making large, feature-rich applications.

This article will primarily focus on end-user-driven projects, i.e. projects that anyone can use. This means all of these projects will have some focus on building a user interface.

Projects:

  1. A website that allows you to get a feed of all your favorite blogs/YouTube channels.

    • You usually need to go to Medium, Dev.To, Hacker News, or individual blogs. Why can’t there be a service that just gives you everything you subscribe to in one big feed?
  2. Task Management System.

    • Instead of using Trello, to-do lists, and Jira, why don’t you make your own mixture of the three? And have it integrate with your calendar of choice.
  3. Job Board.

    • Looking for a job these days is pretty spammy. There are a multitude of job boards that have posting after posting, and offer no information other than the post itself. Most job boards have more features geared towards recruiters, and none for job seekers. And most of the time you end up having to make a spreadsheet to keep all your applications organized. Make something that fixes these problems.
  4. Gift Recommendations.

    • Everyone needs some help looking for a gift. Make a site that scrapes Amazon for items, then build an ML model to recommend them to users through affiliate links. Use the information from every recommendation to train the model.
  5. Apply Machine Learning to your domain of choice.

    • Are you a Pokemon card nerd? Or a gear head? Why don’t you make an ML model that can classify car parts or Pokemon cards, and then deploy it as an API? You might be wondering how you could possibly do that, and it’s easier than you think. Just do the first few lessons of arguably one of the greatest ML courses ever: https://course.fast.ai/index.html
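As a sketch of how the first project could start, here is a minimal plain-Python pass at parsing and merging RSS 2.0 feeds. The feed strings, function names, and field layout below are my own stand-ins; a real version would download each blog's feed over HTTP first:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def parse_rss(xml_text, source):
    """Pull entries out of one RSS 2.0 document, tagging each with its source."""
    root = ET.fromstring(xml_text)
    entries = []
    for item in root.iter("item"):
        entries.append({
            "source": source,
            "title": item.findtext("title", default="(untitled)"),
            "published": parsedate_to_datetime(item.findtext("pubDate")),
        })
    return entries

def merged_feed(feeds):
    """Combine several parsed feeds into one list, newest first."""
    entries = [e for source, xml_text in feeds for e in parse_rss(xml_text, source)]
    return sorted(entries, key=lambda e: e["published"], reverse=True)

# Two tiny stand-in feeds -- in a real app you would fetch these from
# each blog's /rss or /feed URL with urllib or a similar HTTP client.
FEED_A = """<rss><channel>
  <item><title>Post A1</title><pubDate>Tue, 02 Jan 2018 10:00:00 GMT</pubDate></item>
</channel></rss>"""
FEED_B = """<rss><channel>
  <item><title>Post B1</title><pubDate>Wed, 03 Jan 2018 09:00:00 GMT</pubDate></item>
</channel></rss>"""

feed = merged_feed([("Blog A", FEED_A), ("Blog B", FEED_B)])
print([e["title"] for e in feed])  # newest first: ['Post B1', 'Post A1']
```

From here the "one big feed" is just a matter of rendering that merged list in a UI and persisting subscriptions.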


Here is a list of things you will end up learning, or gaining experience in, after completing any of these projects:

  1. Application Architecture:

    1. How to structure your code
    2. What Frameworks to use
    3. How data flows through the application
    4. What data structures to use
  2. UI Design:

    1. How to plan out the interface of the application
    2. How to build for convenience
    3. Getting better at CSS
  3. Database Integration:

    1. How to save data
    2. How to work with data
    3. How to integrate application logic with data logic
  4. Application Security:

    1. How to make user accounts
    2. How to stop users from accessing other users’ content
    3. How to safely store user passwords
    4. Understanding basic attack vectors: SQL injection, XSS, CSRF, etc.
  5. Getting Better at Your Framework of Choice
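For the application-security items above, here is a hedged sketch of one way to store and check user passwords using only Python's standard library. The function names are my own, and production apps often reach for a vetted library such as bcrypt or argon2 instead:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a PBKDF2-SHA256 digest; store salt and iterations with it."""
    salt = salt or os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    """Recompute the digest and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, rounds, stored = hash_password("hunter2")
print(verify_password("hunter2", salt, rounds, stored))  # True
print(verify_password("wrong", salt, rounds, stored))    # False
```

The key ideas to take away: never store the plain password, salt every hash, make hashing deliberately slow, and compare digests in constant time.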

How I went about choosing a Deep Learning Framework

The following is an excerpt from my final capstone project.

Introduction

The hardware and software section primarily explores the two key parts in the development of neural networks. Currently the two competing software libraries for the development of neural networks are PyTorch and TensorFlow, and the two competing hardware platforms for training models are AMD and Nvidia [6]. In this section I will explore the benefits and disadvantages of each.

Deep Learning Software & Hardware Selection

When looking into developing our model I identified two key choices: software selection and hardware selection. I identified framework selection as a key choice since it would act as the key building block in constructing the model and affect how fast I could train it. Hardware selection was important since it would be the primary limiting factor in how fast I could train the model, and how complex I could make it.

Software Selection

Due to the exponential expansion of machine learning (ML) research and computing power over the last decade, there has been an explosion of new software infrastructure to harness it. This software has come from both academic and commercial sources. The need for this infrastructure arises from the fact that there needs to be a bridge between theory and application. When I looked at the most popular frameworks, I found a mix of strictly academic and commercially driven software. The four main frameworks were Caffe, Theano, Caffe 2 + PyTorch, and TensorFlow (TF).

When I went about choosing a framework, I considered three different factors: community, language, and performance. Community was one of the biggest factors, since I had no real production experience in doing any sort of large-scale ML modeling and deployment. The only framework that fulfilled this need was Google’s TensorFlow. It was released in 2015 and made available to the open source community, leading many academic researchers to contribute to and influence its development, which has resulted in many other companies using it in their production deep learning pipelines. The combination of both software developers and scientists using it has led to a lot of community-driven development, making it easier to use and deploy. A side effect of this large adoption is the generation of detailed documentation written by the community, along with a large number of personal and company blogs detailing how they used TF to accomplish their goals. The only real competitor at the time of writing is Facebook’s Caffe 2 + PyTorch libraries, which were open sourced early this year.

The other factor was the language interface it would use. I wanted an easy-to-use interface with which to build out the model. When I looked at what was available, I found that all of the popular frameworks were written in C++ and CUDA, but had an easy-to-use Python-based interface. The only framework of the four mentioned above that had only a C++ interface was Caffe.

The most important part of framework selection was the performance aspect. Most if not all ML research and production use cases happen on Nvidia GPU hardware. This is due to Nvidia’s development of their CUDA programming framework for use with their GPUs. It makes parallel programming for their GPUs incredibly easy, and this parallelization is what lets the complex matrix operations be computed with incredible speed. Only two frameworks of the four I mentioned used the latest version of CUDA in their code base: TF and Caffe 2 + PyTorch. However, Caffe 2 + PyTorch was not as robust as TensorFlow in supporting the different versions of CUDA.

In the end I chose to go with TF since it had a better community and better CUDA support. I did not choose its nearest competitor, since it was not as well documented and its community was just starting to grow, whereas TF has been thoroughly documented and has had large deployments outside of Google (at places like LinkedIn, Intel, IBM, and Uber). Another major selling point for TF is that it is free, continually getting new releases, and has become an industry-standard tool.

Deep Learning Software Frameworks
| | Caffe | Theano | Caffe 2 + PyTorch | TensorFlow |
| --- | --- | --- | --- | --- |
| Computational graph representation | No | Yes | Yes | Yes |
| Release date | 2013 | 2009 | 2017 + 2016 | 2015 |
| Implementation language | C++ | Python & C | C++ | C++, JS, Swift |
| Wrapper languages | N/A | Python | Python, C++ | C, C++, Java, Go, Rust, Haskell, C#, Python |
| Mobile enabled | No | No | Yes | Yes |
| Corporate backing | UC Berkeley | University of Montreal | Facebook | Google |
| CUDA enabled | No | Yes | Yes | Yes |
| Multi-GPU support | No | No | Yes | Yes |
| Exportable model | Yes | No | Yes & No | Yes |
| Library of pretrained models | Yes | No | Yes | Yes |
| Unique features | Don’t need to code to define a network | First to use CUDA and an in-memory computational graph | Built by the original developers of the Caffe and Theano frameworks; Visdom error-function visualization tool; powers Facebook ML | TensorBoard network visualization and optimization tool; developed by Google Brain; powers Google ML |
| Under active development | No | No | Yes | Yes |

NOTE

The reason PyTorch and Caffe 2 are always mentioned together is that they are meant to be used together. PyTorch is much more focused on research and flexibility, whereas Caffe 2 is more focused on production deployment and inference speed. Facebook’s researchers use PyTorch to prototype models, then translate the model into Caffe 2 using their model transfer tool known as ONNX.

Table 1: A summary of all the information of note that I collected during my research.

Project – Cognitive Oncology Systems (COS)

Overview

COS is a SaaS product designed to help oncologists better diagnose and track tumours in their patients. This MVP was built for my end-of-degree capstone project.

COS uses a trained neural network to perform image segmentation on CT and MRI scans.

Here is a link to the presentation that was made in 2018: https://docs.google.com/presentation/d/1jemo6qzxRQu7MUc8TgouUCiTZMz8LapDyJOfqdsiyLM/edit?usp=sharing

Technologies Used

COS uses a number of open source and closed source technologies:

  • ASP.NET MVC
  • Flask
  • Razor Pages
  • jQuery
  • TensorFlow
  • Docker
  • MS SQL
  • Python
  • C#

Hosted On:

  • Azure Web App as a Service
  • Azure VM (Ubuntu)


Project – Pins

Pins is a hyper-casual game I made a while ago. It revolves around trying to get all your pins stuck into a circling object above. The challenge comes in getting all the pins to stick without touching any of the other pins.

Here it is on both the iTunes and Google Play Store:

Here is a link to the Github repo: https://github.com/RedGhoul/Pins

Here are a few of the screenshots:

 

Neural Network Types

I have been working on my capstone project for the last little bit. It involves using neural networks to solve the problem of segmenting medical images.

Here is a little of what I have learned. I think I am going to start doing a series about this, mainly because machine learning is easier than it seems, and the more people that realize that, the more innovation will happen 😁

* You’ll want to have a good understanding of basic, generic neural networks before reading on.

So let’s get on with it. There are basically a couple of different types of neural networks, such as Generative Adversarial Networks (GANs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs). Each has its own area and application where it works best; however, they all generally use the same principles.

In GANs, one part of the NN is called the generator. The generator generates new data instances, while the other part, the discriminator, evaluates them for authenticity. The discriminator decides whether each instance of data it reviews belongs to the actual training dataset or not. The goal of the discriminator, when shown an instance from the real world, is to recognize it as authentic.

The generalized flow of GAN events is as follows: the generator takes in random numbers and returns an image. The generated image is fed into the discriminator alongside a stream of images taken from the actual dataset. The discriminator takes in both real and fake images and returns probabilities, a number between 0 and 1, with 1 representing a prediction of authenticity and 0 representing fake. Then it enters a double feedback loop, where the discriminator is in a feedback loop with the ground truth, and the generator is in a feedback loop with the discriminator.
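That double feedback loop can be sketched numerically. This is only an illustration of the two loss signals, with made-up discriminator scores standing in for a real network:

```python
import math

def bce(p, label):
    """Binary cross-entropy for one probability p against a 0/1 label."""
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# Stand-in discriminator outputs (a real D would be a trained network):
d_real = 0.9  # D's probability that a real image is authentic
d_fake = 0.2  # D's probability that a generated image is authentic

# Discriminator's loop: push scores on real data toward 1 and on fakes toward 0.
d_loss = bce(d_real, 1) + bce(d_fake, 0)

# Generator's loop: push the discriminator's score on its fakes toward 1.
g_loss = bce(d_fake, 1)

print(round(d_loss, 3), round(g_loss, 3))  # 0.329 1.609
```

Notice the tension: a fake the discriminator easily rejects (0.2) means a low discriminator loss but a high generator loss, which is exactly the adversarial signal that drives both networks to improve.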

GAN_FlowChart

CNNs for semantic image segmentation (SIS) are similar to ordinary GANs in the sense that they are made up of two main parts. The first part is known as the encoder, which is responsible for extracting the features of the image, and the second part is known as the decoder, which is responsible for reconstructing the image from those features.

CNN_FlowChart

The encoding part of a CNN is a stack of Convolutional (C), Activation (A), and Pooling (P) layers. In the convolutional layer, filters are passed along the image, taking dot products to create feature maps. The result gets passed through an activation layer. If the CA pattern is repeated too many times, the feature maps start to degrade; therefore P layers are used. These P layers pool the values in the feature map (for example, by averaging), which helps retain the key features while shrinking the maps. Most CNN architectures have several CAP stacks before the decoding part of the CNN.

CNN_Filter
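The C, A, and P steps above can be sketched in plain Python. The 4x4 "image" and the 2x2 edge-detecting kernel are toy values I picked for illustration; a real CNN learns its filter values during training:

```python
def conv2d(image, kernel):
    """Slide the kernel over the image (valid padding), taking dot products."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def relu(fmap):
    """Activation layer: zero out negative responses."""
    return [[max(0, v) for v in row] for row in fmap]

def avg_pool(fmap, size=2):
    """Pooling layer: average each size x size patch, shrinking the map."""
    return [[sum(fmap[i + a][j + b] for a in range(size) for b in range(size)) / size ** 2
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# A 4x4 image with a vertical edge between columns 1 and 2:
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]  # responds where brightness increases left-to-right

fmap = relu(conv2d(image, kernel))
pooled = avg_pool(fmap)
print(fmap)    # [[0, 2, 0], [0, 2, 0], [0, 2, 0]] -- fires only on the edge
print(pooled)  # [[1.0]]
```

The feature map lights up exactly where the filter's pattern appears, and pooling condenses that response into a smaller summary, which is the CAP stack in miniature.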

The decoding part of the NN goes through the inverse operations of the encoder, since by the time the feature maps reach the decoder they have been significantly compressed. The CAP layers in the decoding portion go through deconvolution (transposed convolution) and upsampling (for example, max unpooling).

RNNs are a type of NN where connections between units form a directed graph along a sequence. This allows them to exhibit dynamic temporal behavior for a time sequence. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.

Recurrent networks are distinguished from feedforward networks by that feedback loop connected to their past decisions, ingesting their own outputs moment after moment as input. It is often said that recurrent networks have memory. Adding memory to neural networks has a purpose: There is information in the sequence itself, and recurrent nets use it to perform tasks that feedforward networks can’t.

That sequential information is preserved in the recurrent network’s hidden state, which manages to span many time steps as it cascades forward to affect the processing of each new example. It is finding correlations between events separated by many moments, and these correlations are called “long-term dependencies”. This is because an event downstream in time depends upon, and is a function of, one or more events that came before. One way to think about RNNs is this: they are a way to share weights over time.
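The hidden-state idea above can be shown with a one-unit recurrent step in plain Python. The weights are arbitrary numbers I picked for illustration; a real RNN learns them:

```python
import math

def rnn_step(x, h, w_x, w_h, b):
    """One recurrent step: the new state mixes the input with the previous state."""
    return math.tanh(w_x * x + w_h * h + b)

w_x, w_h, b = 0.5, 0.8, 0.0
h = 0.0
states = []
for x in [1.0, 0.0, 0.0]:  # a pulse of input, then silence
    h = rnn_step(x, h, w_x, w_h, b)
    states.append(h)

# The input went to zero after the first step, yet the state stays nonzero:
# the feedback loop carries information from earlier steps forward in time.
print([round(s, 3) for s in states])  # [0.462, 0.354, 0.276]
```

That decaying but nonzero state is the "memory" in miniature: each new state is a function of everything that came before, which is how RNNs capture long-term dependencies.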

“?” in Angular 5

As you may know, an Angular component has a life cycle, much like us humans. In the beginning we are nothing, we do a bunch in the middle, and then we die 🙂 However, if you want to call a certain service in your “ngOnInit” to dump some values into a variable that appears in your HTML, you’re going to have a bad time. Or at least Angular is, by throwing a lot of errors in your console; I mean, it will still work.

To get rid of these errors, all you have to do is the following: add a “?” at the end of the variable that’s in the HTML, and ta-da, the errors are gone. This works because the “?” tells Angular to chill out, there will be a value there “eventually”. Hmm… but why eventually, you may ask? It’s because two functions get called before “ngOnInit”, the very first being the constructor of the class and the second being the “ngOnChanges” method. So during the execution of the first two methods, Angular is asking “WTF, where is this thing from the HTML in the .ts file?”, which makes it throw errors.

Using Unreal Again

I had taken a long hiatus from Unreal 4, because I didn’t like Blueprints (Unreal 4’s visual scripting language) and preferred real programming (C++). At which point there started to be a nice long 3-second compile time 😦

So I switched to Blueprints! I bought myself a course and started going at it every day after work (well… almost every day… Westworld doesn’t watch itself). I made the final project in the course, plus some of my own additions (these additions mainly being making the game look AAA, and adding UI).

So this post is dedicated to saying “Hey, look, I made this cool thing 🙂!” mostly by myself.

It’s your basic point-and-click ARPG that only has one level, and uses a lot of the Infinity Blade assets that Epic has given away for free. Here are some screenshots:

 

UPDATE: 2018-08-18:

I plan on adding a better AI to the game using Unreal’s Behavior Tree