Should you switch to .NET 5?

.NET Core 3.1 is in LTS (support ends in December 2022), whereas .NET 5 reaches general availability in November. However, there will never be an LTS version of .NET 5. Microsoft has moved to a new release cadence: every odd-numbered release only gets a general availability (non-LTS) release, while even-numbered releases get LTS.

[Image: .NET release lifecycle]

Pros of .NET 5

  • Unification of runtimes (Mono, …)
  • Performance gains
  • Language interoperability (Swift, Java, …)
  • Not a lot of breaking changes

Cons of .NET 5

  • No LTS release
  • Fundamental packages like xUnit, Swashbuckle, Serilog, etc., have not transitioned over yet.
  • Still a lot of unknowns in terms of bugs and stability.
  • No default base image provided by Microsoft yet; everything remains experimental.

Recommendation: I think everyone who can should switch to .NET 6 some time after it comes out, which is in November 2021. That would give you a whole year to migrate to 6 while 3.1 is still supported.

References

https://dotnet.microsoft.com/platform/support/policy/dotnet-core

https://www.stevejgordon.co.uk/upgrading-from-asp-net-core-3-1-to-5-0-preview-1

How to make your program faster, regardless of programming language or hardware

Person A: Does this sound like an impossible task?

Person B: No, not really.

Person A: Does it have limits?

Person B: Yes it does, but you'll see a dramatic difference in speed regardless.

Person A: This sounds like a scam. Is it a scam? How much will this cost?

Person B: No, it's not a scam, and it's free.

Person A: So what is it?

Person B: You just have to master runtime complexity 😊

Person A: What !? …. 😒

Person B: Yeah I know, I thought that too. But it works 😁

Person A: Yeahhhh, well I never really understood that sort of stuff. I just implement business logic for a living. I don't know much about computer science, and I am not gonna waste my time on this. If I need speed, I'll just deploy it on a bigger server 😒

Person B: Come on … It's easy. You don't have to understand the computer science. You just need to understand graphs and recognize patterns.

Person A: What … really?

Person B: Yeah, you just have to use the graph below. Or just Google something like "Runtime Complexity Graph". All you have to know is that things in red are the danger zone, things in orange are the "meh" zone, and things in the yellowish-green zone are okay.

Person A: Wait, what does this have to do with programming, and what about those function things?

Person B: Oh yeah, right. Those function things represent how your program's runtime can grow. The "n" represents the input to your function, like an array of objects or numbers. In the danger zone, adding just one more element can more than double your time to completion. Whereas in the yellowish-green zone, adding another element doesn't do much of anything.

Person A: 😡 This still doesn't help me.

Person B: Okay, okay, okay, how about I make a cheat sheet for you? Your little guide to spotting when you're in the danger zone?

Person A: Show me.

Person B:

Type | Function | Description
Constant Time | 1 | No matter how many elements/inputs you give your function, its runtime will always stay the same.
Logarithmic Time | log(n) | Doubling the number of inputs/elements does not double the runtime; it only adds a small, constant amount of work. This is the runtime of most search algorithms.
Linear Time | n | Doubling the number of inputs/elements doubles your runtime. This is a for loop spanning from zero to the end of the input.
Linear Time | n + m | Two for loops, one after the other, going over two different collections.
Quasi-linear Time | n * log(n) | Slightly worse than linear time. This is the runtime of most sorting algorithms.
Quadratic Time | n² | Every element in the input is compared with every other element. This is the classic double "for loop" over a single array. Every additional nested "for loop" adds one to the exponent, so five nested for loops would be n⁵.
Quadratic Time | n * m | Two nested for loops, but going over two different collections.
Exponential Time | 2ⁿ | A single extra input doubles the runtime. You never want this, ever.
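
To make the cheat sheet concrete, here is a minimal C# sketch (the class and method names are made up for illustration) showing a linear, a quadratic, and a logarithmic method over the same kind of input.

```csharp
public static class ComplexityExamples
{
    // Linear time, O(n): one pass over the input.
    public static int Sum(int[] numbers)
    {
        var total = 0;
        foreach (var n in numbers)
            total += n;
        return total;
    }

    // Quadratic time, O(n²): every element is compared with every other element.
    public static bool HasDuplicate(int[] numbers)
    {
        for (var i = 0; i < numbers.Length; i++)
            for (var j = i + 1; j < numbers.Length; j++)
                if (numbers[i] == numbers[j])
                    return true;
        return false;
    }

    // Logarithmic time, O(log n): binary search halves the remaining input each step
    // (the input must already be sorted).
    public static bool Contains(int[] sortedNumbers, int target)
    {
        int low = 0, high = sortedNumbers.Length - 1;
        while (low <= high)
        {
            var mid = low + (high - low) / 2;
            if (sortedNumbers[mid] == target) return true;
            if (sortedNumbers[mid] < target) low = mid + 1;
            else high = mid - 1;
        }
        return false;
    }
}
```

Doubling the input size roughly doubles the work in Sum, quadruples it in HasDuplicate, and adds only a single extra step in Contains.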

WTF is Async & Await in C#?

Simply put, they allow you to write asynchronous programs easily, without ever having to reorganize your code, which can lead to massive performance increases.

The “async” and “await” keywords mark asynchronous code: “async” goes in the method's signature, and “await” goes right before the call whose result you want to wait for. However, if a method is async it needs to return a Task (or Task<TResult>) object.

Now you can use the different parts of the Task-based Asynchronous Pattern (TAP), such as starting a bunch of tasks and waiting for them to finish, or kicking off a new task when another task completes, all while your main application keeps running.

How is this possible? Does it start a bunch of new threads? Yes and no. If you start a bunch of tasks and wait for them all to complete, then yes. Whereas if you await a heavy task, the method is split up at the await keyword: the remaining work is scheduled to run when the awaited task completes, and in the meantime the current thread is free to execute whatever else is ready. So you aren't able to tell that your program is waiting around for something.
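
As a minimal sketch of what that looks like (the method name and the one-second delay are made up for illustration), here is a C# example that awaits one task directly, then starts several tasks and waits for them all with Task.WhenAll:

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncExample
{
    // "async" goes in the method signature; the method returns a Task<T>.
    public static async Task<string> DownloadReportAsync(int id)
    {
        await Task.Delay(1000); // stand-in for a slow I/O call (network, disk, database)
        return $"report-{id}";
    }

    public static async Task Main()
    {
        // Await a single heavy task; the thread is free to do other work while it runs.
        var first = await DownloadReportAsync(1);
        Console.WriteLine(first);

        // Start a bunch of tasks and wait for them all to finish.
        var tasks = new[] { DownloadReportAsync(2), DownloadReportAsync(3), DownloadReportAsync(4) };
        var reports = await Task.WhenAll(tasks);
        Console.WriteLine(string.Join(", ", reports));
    }
}
```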

WTF is S.O.L.I.D – WTF is D?

D = Dependency Inversion Principle

This principle is about making sure you never have to rewrite your core logic. If your class or piece of code has a dependency on something else, it should never access it directly. Instead it should go through some intermediary that abstracts the functionality away.

For example, if your application talked to a database, you wouldn't want to be writing SQL statements directly into your code. Likewise, if you were using an ORM (Object Relational Mapper), you wouldn't want queries everywhere, especially if at some point you decide to move to another database type or ORM. To fix this problem you would create a wrapper around it, abstracting complex queries into simple, common method calls like “Update User Profile” or “Set User Password”.

This way, if you ever had to make changes to the logic of how you access the database, you could do it without changing any of your core application logic, since your core application wouldn't directly rely on how the method is implemented. This can also be thought of as always coding to an interface, rather than to a direct implementation.
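
As a hedged sketch of the wrapper idea above (the interface, class, and method names here are made up for illustration), the core application talks only to an interface, and the SQL or ORM details hide behind it:

```csharp
// The abstraction the core application codes against.
public interface IUserRepository
{
    void UpdateUserProfile(int userId, string displayName);
    void SetUserPassword(int userId, string passwordHash);
}

// One possible implementation; it could use raw SQL, an ORM, or a different database entirely.
public class SqlUserRepository : IUserRepository
{
    public void UpdateUserProfile(int userId, string displayName)
    {
        // SQL / ORM details are hidden in here.
    }

    public void SetUserPassword(int userId, string passwordHash)
    {
        // SQL / ORM details are hidden in here.
    }
}

// Core logic depends on the interface, never on a concrete implementation.
public class AccountService
{
    private readonly IUserRepository _users;

    public AccountService(IUserRepository users) => _users = users;

    public void Rename(int userId, string newName) => _users.UpdateUserProfile(userId, newName);
}
```

Swapping databases then means writing a new IUserRepository implementation, while AccountService stays untouched.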

WTF is S.O.L.I.D – WTF is L?

L = Liskov Substitution Principle

This is all about using Object Oriented Programming to its fullest. The Wikipedia article says: “If S is a subtype of T, then objects of type T may be replaced with objects of type S.” So what does that mean?

Well, it means that when you create your class hierarchy and you create your base methods, you have to think about the broader implications. For example, if your root parent class was “Bird”, it would have methods like “Fly”, “Eat” and “Walk”. Then you would have classes like “Hawk”, “Blue Jay”, “Robin”, “Penguin”, and “Ostrich”. Now we should be able to put any of these child classes in place of the parent class and use them. Can you see the problem?

The problem is that penguins and ostriches can't fly, which violates the Liskov Substitution Principle. You can get around this by introducing two intermediate classes that inherit from “Bird”, namely “FlyingBird” and “NonFlyingBird”, and having each specific bird inherit from the appropriate one.
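
A minimal C# sketch of that corrected hierarchy (the class names follow the example above; the method bodies are placeholders):

```csharp
public abstract class Bird
{
    public void Eat() { /* ... */ }
    public void Walk() { /* ... */ }
}

public abstract class FlyingBird : Bird
{
    public void Fly() { /* ... */ }
}

public abstract class NonFlyingBird : Bird
{
    // No Fly method, so a Penguin can never be asked to fly.
}

public class Hawk : FlyingBird { }
public class Robin : FlyingBird { }
public class Penguin : NonFlyingBird { }
public class Ostrich : NonFlyingBird { }
```

Code written against FlyingBird can safely call Fly on a Hawk or a Robin, and code written against Bird still works for every bird, which is exactly what the substitution principle asks for.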

WTF is S.O.L.I.D – WTF is S?

In this series I am going to go through each of the principles and explain them in as simple a manner as possible.

S = Single Responsibility Principle

Any part of your code (a class, a module, etc.) should only ever have one reason to change. For example, if you had a Person class, then everything in that class should only do things related to a person. A Person class should have methods like “Eat”, “Sleep”, and “Play”. However, a person should never need a “Log” method, because logging has nothing to do with a person.
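
A small C# sketch (the members shown are just illustrative): the Person class keeps only person-related behaviour, and logging gets its own class with its own single reason to change.

```csharp
public class Person
{
    public string Name { get; set; }

    public void Eat() { /* ... */ }
    public void Sleep() { /* ... */ }
    public void Play() { /* ... */ }
}

// Logging is a separate responsibility, so it lives in a separate class.
public class Logger
{
    public void Log(string message) => System.Console.WriteLine(message);
}
```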

 

Docker – Cheat Sheet

The basic commands you need to be productive with Docker:

How do I get a list of all running Docker containers?

  • docker ps

How do I list all containers, including stopped ones?

  • docker ps -a

How do I remove a container?

  • docker rm <container id or name>

How do I see all my images?

  • docker images

How do I remove an image?

  • docker rmi <name of image here>

How do I get an image onto my local machine?

  • docker pull <name of image here>

How do I make a container and run it?

  • docker run <image name>

How do I run a container and start an interactive shell in it?

  • docker run -it <image name>

How do I map a port from my container to the outside?

  • docker run -p <outside port>:<port inside docker container> <image name>

How do I get details about an image?

  • docker inspect <image name>

How do I look at the logs coming out of a container?

  • docker logs <container name>

How do I start a container and leave it running in the background, without it tying up my terminal session?

  • docker run -d <image name>

How do I build an image of my application and tag it?

  • docker build -t <user-name>/<app-name> .

 

4 Reasons why you should choose React for your next project!

1) Don’t Touch the DOM – Imperative Vs Declarative

Hey, don't touch the DOM! React will do that for you. You might be wondering what I mean by that. React shields you from manually interacting with the DOM (the imperative approach) using JavaScript. Instead, React uses something called the “Virtual DOM”. The Virtual DOM is a representation of the actual DOM that React uses to figure out what to actually change on the screen. React will always be like “Oh, you made that change? Let me handle the best way to apply that update.” So when you use React you are using a declarative way of programming.

2) Hair Balls Vs Lego – Component Architecture

Front-end apps these days are pretty complex; for example take Netflix, Facebook, or Airbnb. They all have complex user interactions and require a large number of cross-interacting entities. If you tried to build any one of these applications in vanilla JavaScript, and then tried to add more and more features, you would end up with a large hairball of code featuring big chunks of CSS, JS, and HTML all over the place. And whenever anyone tried to add a new feature, they would have to copy and paste code (the horror!). Components to the rescue! With components you can encapsulate each and every aspect of your app into little manageable chunks that can be imported into other parts of your application, making building new features as easy as building with Lego.

3) One Way Data Flow

Every application has “state”. This can be anything from how many times you clicked a button to your current permissions inside the app. In traditional front-end apps this state is spread around different chunks of the app and is not shareable across different areas of it. For example, if you had a news editor app, it would have to know who has permission to actually publish the content, and give the user some categories to publish it under.

React deals with this problem by enforcing a centralized state. Once the “state” of the app changes, React automatically reacts and makes the necessary DOM changes to reflect the change. With this centralization you minimize the places where potential bugs show up, and you have a better idea of where the bugs are in your application, since data only flows one way through your application: from your “state” down to the components.

4) Just the UI

React is a library for building UIs, plain and simple. It does not try to be a massive framework that has every single feature you could want. Therefore you can add the extra features you want as third-party libraries. This way your app becomes highly customized toward your use case, rather than you needing to bend a framework to work the way you want it to.

 

5 Projects to Improve Your Resume and Learn To Program

One of the best ways to get started with programming or help in finding a job is doing side projects. They help you understand what goes into making large feature rich applications.

This article will primarily focus on end-user-driven projects, i.e. projects that anyone can use. This means all these projects will have some focus on building a UI.

Projects:

  1. A website that allows you to get a feed of all your favorite blogs/ YouTube channels.

    • You usually need to go to Medium, Dev.To, Hacker News, or individual blogs. Why can't there be a service that just gives me everything I subscribe to in one big feed?
  2. Task Management System.

    • Instead of using Trello, ToDo List, and Jira, why don't you make your own mixture of the three? And have it integrate with your favorite calendar of choice.
  3. Job Board.

    • Looking for a job these days is pretty spammy. There are a multitude of job boards that have posting after posting and offer no information other than the post. Most job boards have features geared towards recruiters and none for job seekers. And most of the time you end up having to make a spreadsheet to keep all your applications organized. Make something that fixes these problems.
  4. Gift Recommendations.

    • Everyone needs some help looking for a gift. Make a site that scrapes Amazon for items, then build an ML model to recommend them to users using affiliate links. Use the information from every recommendation to train the model.
  5. Apply Machine Learning to your domain of choice.

    • Are you a Pokemon card nerd? Or a gear head? Why don't you make an ML model that can classify car parts or Pokemon cards, and then deploy it as an API? You might be wondering how you could possibly do that, and it's easier than you think. Just do the first few lessons of arguably one of the greatest ML courses ever: https://course.fast.ai/index.html

 

Here is a list of things you will end up learning, or gaining experience in, after completing any of the projects:

  1. Application Architecture:

    1. How to structure your code
    2. What Frameworks to use
    3. How data flows through the application
    4. What data structures to use
  2. UI – Design:

    1. How to plan out the interface of the application
    2. How to build for convenience
    3. Getting better at CSS
  3. Database Integration:

    1. How to save data
    2. Work with data
    3. Integrate application logic with data logic
  4. Application Security:

    1. How to make user accounts
    2. Stop others from accessing another user's content
    3. Store user passwords safely
    4. Understand basic attack vectors – SQL Injection, XSS, CSRF, etc.
  5. Getting Better at your Frameworks of Choice

How I went about choosing a Deep Learning Framework

The following is an excerpt from my final capstone project.

Introduction

The hardware and software section will primarily explore the two key parts of developing neural networks. Currently the two competing software libraries for the development of neural networks are PyTorch and TensorFlow, and the two competing hardware platforms for training models are AMD and Nvidia [6]. In this section I will explore the benefits and disadvantages of each.

Deep Learning Software & Hardware Selection

When looking into developing our model I identified two key choices: software selection and hardware selection. I identified framework selection as a key choice since it would act as the key building block in constructing the model and affect how fast I could train it. Hardware selection was important since it would be the primary limiting factor in how fast I could train the model and how complex I could make it.

Software Selection

Due to the exponential expansion of machine learning (ML) research and computing power seen over the last decade, there has also been an explosion of new types of software infrastructure to harness it. This software has come from both academic and commercial sources. The need for this infrastructure arises from the fact that there needs to be a bridge between theory and application. When I looked at the most popular frameworks, I found a mix of strictly academic and commercially driven software. The four main frameworks were Caffe, Theano, Caffe2 + PyTorch, and TensorFlow (TF).

When I went about choosing a framework, I considered three different factors: community, language, and performance. Community was one of the biggest factors, since I had no real production experience in doing any sort of large-scale ML modeling and deployment. The only framework that fulfilled this need was Google's TensorFlow. It had been released in 2015 and made available to the open source community, leading many academic researchers to contribute to and influence its development, which in turn has resulted in many other companies using it in their production deep learning pipelines. The combination of software developers and scientists using it has led to a lot of community-driven development, making it easier to use and deploy. A side effect of this broad adoption is detailed documentation written by the community, along with a large number of personal and company blogs detailing how they used TF to accomplish their goals. The only real competitor at the time of writing this is Facebook's Caffe2 + PyTorch libraries, which were only open sourced early this year.

The other factor was the language interface it would use. I wanted an easy-to-use interface with which to build out the model. When I looked at what was available, I found that all of the popular frameworks were written in C++ and CUDA but had an easy-to-use Python-based interface. The only framework of the four mentioned above that had only a C++ interface was Caffe.

The most important part of framework selection was the performance aspect. Most, if not all, ML research and production use cases happen on Nvidia GPU hardware. This is due to Nvidia's development of their CUDA programming framework for use with their GPUs, which makes parallel programming for them incredibly easy. This parallelization is what lets the complex matrix operations be computed with incredible speed. Only two of the four frameworks I mentioned used the latest version of CUDA in their code base: TF and Caffe2 + PyTorch. However, Caffe2 + PyTorch was not as robust as TensorFlow in supporting the different versions of CUDA.

In the end I chose to go with TF since it had a better community and better CUDA support. I did not choose its nearest competitor, since it was not as well documented and its community was just starting to grow. TF, on the other hand, has been thoroughly documented and has had large deployments outside of Google (at places like LinkedIn, Intel, IBM, and Uber). Another major selling point for TF is the fact that it is free, continually getting new releases, and has become an industry standard tool.

Deep Learning Software Frameworks

Name | Caffe | Theano | Caffe 2 + PyTorch | TensorFlow
Computational Graph Representation | No | Yes | Yes | Yes
Release Date | 2013 | 2009 | 2017 + 2016 | 2015
Implementation Language | C++ | Python & C | C++ | C++, JS, Swift
Wrapper Languages | N/A | Python | Python, C++ | C, C++, Java, Go, Rust, Haskell, C#, Python
Mobile Enabled | No | No | Yes | Yes
Corporate Backing | UC Berkeley | University of Montreal | Facebook | Google
CUDA Enabled | No | Yes | Yes | Yes
Multi-GPU Support | No | No | Yes | Yes
Exportable Model | Yes | No | Yes & No | Yes
Library of Pretrained Models | Yes | No | Yes | Yes
Unique Features | Don't need to write code to define a network | First to use CUDA and an in-memory computational graph | Built by the original developers of the Caffe and Theano frameworks | (not listed)
Visualization Tool | N/A | N/A | Visdom (error function visualization tool); powers Facebook ML | TensorBoard (network visualization and optimization tool, developed by Google Brain); powers Google ML
Under Active Development | No | No | Yes | Yes

NOTE

The reason PyTorch and Caffe 2 are always mentioned together is that they are meant to be used together. PyTorch is much more focused on research and flexibility, whereas Caffe 2 is more focused on production deployment and inference speed. Facebook's researchers use PyTorch to prototype models, then translate the model into Caffe 2 using their model transfer tool, ONNX.

Table 1: A summary of the information of note that I collected during my research.