WTF is Async & Await in C# ?

Simply put, they let you write asynchronous programs without ever having to reorganize your code, which can lead to massive performance increases.

The “async” and “await” markers are keywords that mark asynchronous code: “async” goes in the method’s signature, and “await” goes right before a call whose result you want to wait for. If a method is marked async, it needs to return a Task (or a Task<T> if it produces a value).
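For example, a minimal async method might look like this (the class and method names here are just for illustration):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

class Downloader
{
    // "async" is in the signature; the method returns Task<string>
    // because it ultimately produces a string.
    public async Task<string> GetPageAsync(string url)
    {
        using (var client = new HttpClient())
        {
            // "await" pauses this method (not the whole thread)
            // until the download finishes.
            return await client.GetStringAsync(url);
        }
    }
}
```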

Now you can use the different parts of the Task-based Asynchronous Pattern (TAP): start a bunch of tasks and wait for them all to finish, or even chain a new task onto the completion of another, all while your main application keeps running.

How is this possible ? Does it start a bunch of new threads ? Yes and no. If you start a bunch of tasks and wait for them all to complete, then yes, they can run on separate threads. Whereas if you await a single heavy task, your method is suspended the second it hits the await keyword, control returns to the caller, and the rest of the method is scheduled to run once the task completes, using whatever time is available on the current thread. So you can’t tell that your program is waiting around for something.
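As a sketch of the “start a bunch of tasks and wait for them” case (the work inside the tasks is just a stand-in):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // These tasks start running right away, potentially on pool threads.
        Task<int> a = Task.Run(() => 1 + 1);
        Task<int> b = Task.Run(() => 2 + 2);

        // Await both; results come back in the order the tasks were passed in.
        int[] results = await Task.WhenAll(a, b);
        Console.WriteLine(string.Join(", ", results)); // 2, 4
    }
}
```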

Questions People have Asked Me – Part 2

What is the root object in the base class library ?

For Java and C# that would be “Object”.

What methods does “Object” have ?

For C#:

    • Equals – Supports comparisons between two objects.
    • Finalize – Performs cleanup operations on unmanaged resources held by the object, before the object is destroyed.
    • GetHashCode – Generates a number based on the value of an object; used to support hash tables.
    • GetType – Returns the runtime type of the object.
    • ToString – Creates a human-readable piece of text that describes the object.
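A small class that overrides three of these might look like this (the Point type is made up for the example):

```csharp
using System;

class Point
{
    public int X, Y;
    public Point(int x, int y) { X = x; Y = y; }

    // Value equality instead of the default reference equality.
    public override bool Equals(object obj) =>
        obj is Point p && p.X == X && p.Y == Y;

    // Equal objects must return equal hash codes.
    public override int GetHashCode() => (X * 397) ^ Y;

    // Human-readable description.
    public override string ToString() => $"({X}, {Y})";
}

class Program
{
    static void Main()
    {
        Console.WriteLine(new Point(1, 2).Equals(new Point(1, 2))); // True
        Console.WriteLine(new Point(1, 2)); // (1, 2)
    }
}
```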

Is “String” mutable ?

For C# & Java: Strings are always IMMUTABLE
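A quick way to see this in C#: every “modifying” string method actually hands back a brand new string.

```csharp
using System;

class Program
{
    static void Main()
    {
        string s = "hello";
        string t = s.Replace('h', 'j'); // s is untouched; t is a new string
        Console.WriteLine(s); // hello
        Console.WriteLine(t); // jello
    }
}
```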

What is Boxing and Unboxing ?

For C#:

Boxing is the process of converting a value type to the type object, or to any interface type implemented by that value type. Storing an int in an object is “boxing”, and it is implicit (you do it without thinking about it). Taking that object and converting it back is “unboxing”, which is explicit: something like “int i = (int)x” where x is of type object. Why would you ever do this ? Value types get stored on the stack, whereas reference types get stored on the heap, and boxing is what lets a value type travel through APIs and collections that only work with object. Keep in mind that boxing allocates a new object on the heap and unboxing requires a cast, so doing a lot of it is a performance cost rather than a performance fix.
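The round trip in code:

```csharp
using System;

class Program
{
    static void Main()
    {
        int i = 42;
        object x = i;         // boxing: implicit, copies the int into a heap object
        int j = (int)x;       // unboxing: explicit cast back to the value type
        Console.WriteLine(j); // 42
    }
}
```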

 

WTF is S.O.L.I.D – WTF is I ?

I = Interface Segregation Principle

This is all about separating your interfaces and making them smaller. We don’t want to end up with mega interfaces that have tons of properties and methods that may or may not be used by the classes implementing them. If you don’t follow this principle you are probably going to end up with a hairball of OOP design that makes refactoring harder further down the line.

For example, let’s say you have an “Entity” interface with the properties “attackDamage” and “health”, and the methods “move”, “attack”, and “takeDamage”. Now let’s say the classes “Player”, “Enemy”, “Chair”, and “MagicChest” all implement it. Does this make sense ? Should the “Chair” class have to implement the “attack” method ? Most likely not, but it might still need something like the “health” property. So we can factor out the pieces that the implementing classes actually have in common: instead of just the one “Entity” interface, we can have “BaseEntity”, “ActorEntity”, and “StaticObject” interfaces. This way no class is stuck with unnecessary implementations.
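A sketch of that split in C# (the interface and member names come from the example above, with the conventional “I” prefix added; the exact signatures are my own guesses):

```csharp
// Everything shares only what it really has in common.
interface IBaseEntity
{
    int Health { get; set; }
}

// Things that can act: players, enemies.
interface IActorEntity : IBaseEntity
{
    int AttackDamage { get; set; }
    void Move();
    void Attack(IBaseEntity target);
    void TakeDamage(int amount);
}

// Things that just sit in the world: chairs, chests.
interface IStaticObject : IBaseEntity
{
    void TakeDamage(int amount);
}

class Chair : IStaticObject
{
    public int Health { get; set; } = 10;
    public void TakeDamage(int amount) => Health -= amount;
    // No Attack() or Move() to fake any more.
}
```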

WTF is S.O.L.I.D – WTF is O ?

O = Open/Closed Principle

This is all about being “open for extension but closed for modification”: your code should be extendable without you constantly having to change its existing parts. This can come in many forms. For example, instead of overloading a class with a pile of unrelated methods, such as a “Person” class needing a “write a book” method, a “fight a fire” method, and a “cook a five star meal” method, we can separate these into classes that all inherit from “Person”. That lets us write code to extend existing functionality without going in and making changes to the core logic of the Person class.
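Sketched out (the method bodies are placeholders):

```csharp
class Person
{
    public string Name { get; set; }
}

// New behaviors are added by extending Person,
// not by editing the Person class itself.
class Author : Person
{
    public void WriteBook() { /* ... */ }
}

class Firefighter : Person
{
    public void FightFire() { /* ... */ }
}

class Chef : Person
{
    public void CookFiveStarMeal() { /* ... */ }
}
```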

Another example of this is a large “if – else” or “switch” statement that does different things based on the input passed in. Instead of keeping this large if-statement, we can refactor the logic back into the input itself. Say we are handed a generic account class and have to calculate its net worth based on the account type: we should create a class for each account type and store the calculation logic there, rather than in the “if else” clauses.
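Roughly, that refactor looks like this (the account types and net worth rules here are hypothetical):

```csharp
abstract class Account
{
    public decimal Balance { get; set; }

    // Each account type knows its own calculation,
    // so no central if/else needs to change when a new type is added.
    public abstract decimal NetWorth();
}

class SavingsAccount : Account
{
    public override decimal NetWorth() => Balance;
}

class MarginAccount : Account
{
    public decimal Loan { get; set; }
    public override decimal NetWorth() => Balance - Loan;
}
```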

How to get rid of duplicates in a table

How does one even know they have duplicates ? Or to what extent ? You can use this SQL statement right here:
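The statement was along these lines (I’m assuming a “movies” table with “title” and “id” columns; swap in your own table and column names):

```sql
SELECT title, COUNT(title)
FROM movies            -- hypothetical table name
GROUP BY title
HAVING COUNT(title) > 1;
```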

The key here is that we are using the “group by” clause to aggregate the data, and then the “having” filter clause to add the condition “count(title) > 1”, which is just saying “this title was found more than once”.

Now that we have made sure we actually do have duplicates, let’s get down to deleting them.
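The delete looked roughly like this (again assuming a “movies” table with “title” and “id” columns; this is the MySQL-style multi-table delete):

```sql
DELETE dupes
FROM movies AS dupes, movies AS fulltable
WHERE dupes.title = fulltable.title   -- same title = duplicate pair
  AND dupes.id > fulltable.id;        -- keep the lower id, drop the greater one
```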

The key here is that we are using aliases rather than joins or anything else, so you can think of “dupes” and “fulltable” as two variables pointing at the same table. First we set up these aliases, then we use the where clause to match duplicate rows. If we just ran the query at this point, we would end up deleting our whole table, since every row matches itself. Therefore we add a final “and” specifying that, out of each matching pair of rows in “dupes” and “fulltable”, we only delete the one with the greater id.

Docker – Cheat Sheet

The basic commands you need to be productive with Docker:

How do I get a list of all running docker containers ?

  • docker ps

How do I just get all the containers ?

  • docker ps -a

How do I remove a container ?

  • docker rm <container id or name>

How do I see all my images ?

  • docker images

How do I remove an image ?

  • docker rmi <name of image here>

How do I get an image on to my local machine ?

  • docker pull <name of image here>

How do I make a container and run it ?

  • docker run <image name>

How do I run a container and start an interactive shell in it ?

  • docker run -it <image name>

How do I map a port from my container to the outside ?

  • docker run -p <outside port>:<port inside docker container> <image name>

How do I get details about an image ?

  • docker inspect <image name>

How do I look at the logs coming out of a container ?

  • docker logs <container name>

How do I start up a container and leave it running in the background, without it tying up my terminal session ?

  • docker run -d <image name>

How do I build my application, and tag it ?

  • docker build -t <user-name>/<app-name> .

 

How I went about choosing a Deep Learning Framework

The following is an excerpt from my final capstone project.

Introduction

The hardware and software section primarily explores the two key parts in the development of neural networks. Currently the two competing software libraries for the development of neural networks are PyTorch and TensorFlow, and the two competing hardware platforms for training models are AMD and Nvidia [6]. In this section I will explore the benefits and disadvantages of each.

Deep Learning Software & Hardware Selection

When looking into developing our model I identified two key choices: software selection and hardware selection. I identified framework selection as a key choice since it would act as the key building block in constructing the models and would affect how fast I could train them. Hardware selection was important since it would be the primary limiting factor on both how fast I could train the model and how complex I could make it.

Software Selection

Due to the exponential expansion of machine learning (ML) research and computing power seen over the last decade, there has been an explosion of new software infrastructure to harness it, coming from both academic and commercial sources. The need for this infrastructure arises from the fact that there needs to be a bridge between theory and application. When I looked at what the most popular frameworks were, I found a mix of strictly academic and commercially driven software. The four main frameworks were Caffe, Theano, Caffe2 + PyTorch, and TensorFlow (TF).

When I went about choosing a framework, I considered three different factors: community, language, and performance. Community was one of the biggest factors, since I had no real production experience with any sort of large scale ML modeling and deployment. The only framework that fulfilled this need was Google’s TensorFlow. It was released in 2015 and made available to the open source community, which led many academic researchers to contribute to and influence its development, and in turn resulted in many companies using it in their production deep learning pipelines. The combination of software developers and scientists using it has driven a lot of community development, making it easier to use and deploy. A side effect of this broad adoption is detailed documentation written by the community, plus a large number of personal and company blogs detailing how they used TF to accomplish their goals. The only real competitor at the time of writing is Facebook’s Caffe2 + PyTorch libraries, which were only open sourced early this year.

The other factor was the language interface. I wanted an easy to use interface with which to build out the model. When I looked at what was available, I found that the popular frameworks were written in C++ and CUDA but offered an easy to use Python based interface. The only framework of the four mentioned above with just a C++ interface was Caffe.

The most important part of framework selection was performance. Most if not all ML research and production use cases run on Nvidia GPU hardware. This is due to Nvidia’s development of the CUDA programming framework for use with their GPUs, which makes parallel programming on them incredibly easy. This parallelization is what lets complex matrix operations be computed with incredible speed. Only two of the four frameworks I mentioned used the latest version of CUDA in their code base: TF and Caffe2 + PyTorch. However, Caffe2 + PyTorch was not as robust as TensorFlow in supporting the different versions of CUDA.

In the end I chose TF since it had the better community and CUDA support. I did not choose its nearest competitor since it was not as well documented and its community was just starting to grow, whereas TF is thoroughly documented and has had large deployments outside of Google (at places like LinkedIn, Intel, IBM, and Uber). Another major selling point for TF is that it is free, continually getting new releases, and has become an industry standard tool.

Deep Learning Software Frameworks

| Name | Caffe | Theano | Caffe 2 + PyTorch | TensorFlow |
| --- | --- | --- | --- | --- |
| Computational graph representation | No | Yes | Yes | Yes |
| Release date | 2013 | 2009 | 2017 + 2016 | 2015 |
| Implementation language | C++ | Python & C | C++ | C++, JS, Swift |
| Wrapper languages | N/A | Python | Python, C++ | C, C++, Java, Go, Rust, Haskell, C#, Python |
| Mobile enabled | No | No | Yes | Yes |
| Corporate backing | UC Berkeley | University of Montreal | Facebook | Google |
| CUDA enabled | No | Yes | Yes | Yes |
| Multi-GPU support | No | No | Yes | Yes |
| Exportable model | Yes | No | Yes & No | Yes |
| Library of pretrained models | Yes | No | Yes | Yes |
| Unique features | Don’t need to code to define a network | First to use CUDA and an in-memory computational graph | Built by the original developers of the Caffe and Theano frameworks | |
| Visualization tool | | | Visdom – error function visualization tool; powers Facebook ML | TensorBoard – network visualization and optimization tool; developed by Google Brain, powers Google ML |
| Under active development | No | No | Yes | Yes |
NOTE

The reason PyTorch and Caffe 2 are always mentioned together is that they are meant to be used together. PyTorch is much more focused on research and flexibility, whereas Caffe 2 is more focused on production deployment and inference speed. Facebook’s researchers use PyTorch to prototype models, then translate the models into Caffe 2 using the model transfer format known as ONNX.

Table 1: A summary of all the information of note that I collected during my research.