.Net: C#

Ok, so I’ve talked about the .Net Framework and .Net Core; now I think it’s time we have a bit of an introduction to a .Net language, specifically C# (pronounced ‘C Sharp’).

C# is a member of the C family of languages, which includes C, C++, Java, Objective-C, and many others. It is commonly described as a general-purpose, type-safe, object-oriented programming language. Let’s quickly break that down:

  • General-Purpose: Means the language has been designed for application in many different domains.
  • Type-Safe: Means the behaviours of our programs are well defined; the compiler stops us using a value in ways its type doesn’t allow.
  • Object-oriented: Means we create our programs around the concept of real-world objects. See this post.

The core syntax of C# is very similar to that of Java, but it is not a clone, at least not directly. C# was primarily modelled after Visual Basic (another .Net language) and C++ (developed by Bjarne Stroustrup).

C#, however, gives us a lot of what other languages do, including overloading, overriding, structs, enums, and delegates (callbacks), but it also gives us features more commonly found in functional languages like F# and LISP, such as anonymous types and lambda expressions. All of which I will be writing about in the future.
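To make a couple of those terms concrete, here’s a minimal sketch (the type and method names are my own, purely for illustration) showing a delegate used as a callback, a lambda expression, and an anonymous type:

```csharp
using System;

// A delegate is a type-safe reference to a method, usable as a callback.
public delegate int Transform(int value);

public static class LanguageFeatures
{
    // Accepts any method or lambda expression matching the Transform signature.
    public static int Apply(int value, Transform transform) => transform(value);

    public static string DescribePoint()
    {
        // An anonymous type: the compiler generates the type definition for us.
        var point = new { X = 1, Y = 2 };
        return $"({point.X}, {point.Y})";
    }
}
```

Calling `LanguageFeatures.Apply(5, x => x * 2)` returns 10, with the lambda standing in for a named method.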

The intention was for a simple yet modern language that provides portability and deployment to different types of environments. With the release of .Net Core, I’d say it is certainly living up to expectations.

The latest stable release of C# is 7.3, as of May 2018, and the latest preview release is 8.0. Keep an eye out for a post that gives an overview of the changes provided at each release.

I won’t go into too much detail for this post as I want to start writing smaller posts targeting specific parts of the C# language.


This is very different from my normal posts and I won’t go on for long, but it’s related to something I’m getting more and more interested in and concerned about… the environment and what we are doing to it.

I’m not sure how popular it is, but I came across a search engine called Ecosia and they seem to be doing some really interesting and admirable things. Their main goal is to provide search capabilities and use the revenue generated to offset and actually reduce our carbon footprint.

It appears their primary means of doing this is by planting trees and at the time of writing they say they’ve planted over 55.6 million.

They provide their financials as monthly reports on their site, so everyone can see where the money they generate is spent.

They run their servers on renewable energy, and by planting trees they are actually removing CO2 from the atmosphere.

Check them out and see what you think. I’m going to start doing my searches through them and hopefully it works out.

Let me know if you use it and how you find the experience (though I’d probably give them a fair few searches first)…

.Net: Core

So we’ve looked at the .Net Framework, which was limited to Windows; now let’s check out .Net Core…

.Net Core 1.0 was released by Microsoft in June 2016 and marked a new path they were creating for the .Net ecosystem; while this new direction was based on C# and the .Net Framework, it was quite different.

Two of the biggest changes for the .Net ecosystem came with it. The first: it would be cross-platform, released for Windows, macOS, and Linux at the same time, in contrast to the .Net Framework only ever having been available on Windows. The second: .Net Core was going to be Open-Source, that is COMPLETELY Open-Source, meaning the code is available for us to review so we can get a better understanding of what is actually happening. The community is also encouraged to get involved by way of pull requests and the like. This means we can get our own eyes on the framework and its documentation, raise issues such as bug reports, fix bugs, recommend or implement improvements, and add additional functionality. Not bad, right?

The origin of .Net Core came from developers requesting that ASP.NET run on multiple platforms, not just Windows. From there we got ASP.NET 5 and Entity Framework 7, which, to be honest, was more than a little confusing: given we had .Net Framework 4.6 and Entity Framework 6, you’d expect them to be the next releases of those, right? Not so. Thus we got .Net Core, ASP.NET Core, and Entity Framework Core, to show the distinction between the two ecosystems.

Since the announcement in 2016, we’ve seen the release of .Net Core 1.0, .Net Core 1.1 in March 2017, .Net Core 2.0 in August 2017, and .Net Core 2.1 in May 2018, and we’re currently at .Net Core 3.0 Preview 3 as of March 2019. So we’re making some decent progress and getting more and more functionality with each release.
But what is driving these releases? Microsoft is developing their .Net offerings (Core and Framework) in line with a formal specification called .Net Standard, currently at version 2.0. The goal is to establish uniformity across the whole .Net ecosystem. The image below is directly from Microsoft and, as they say, “a picture paints a thousand words”, which I think makes their goal a bit more obvious.

I haven’t touched on Xamarin because I want to save that for its own post. Check out this post for a very quick intro though.

From the image and the statement above their goal is to have a single library which can be used for the .Net Framework, .Net Core, and Xamarin.

Through the NuGet Package Manager we can download .Net Standard or third-party packages which contain just the constructs or functionality we need, greatly reducing our production footprint. For example, if you need a JSON parser you can swing by the NuGet Package Manager and add an implementation of your choice, such as JSON.Net, to your project. This is a very easy and quick way for us to develop, and we don’t have to hunt for implementations out on the web that may or may not do what we need.

In addition, we have been given command line support: we can create, develop, build, test, and publish our applications all from the command line, again all cross-platform. Certainly worth looking into further, so keep an eye out for that post if it interests you.
We also get the Universal Windows Platform (UWP), which you can see in the image above, which follows Microsoft’s write-once-run-anywhere approach. By using UWP we can run the same code on multiple Windows platforms, including IoT devices (like the Raspberry Pi), mobile devices running Windows 10 Mobile, PCs, HoloLens, Surface devices, and even the Xbox One.

If you checked out the Xamarin post I mentioned above, you’ll know Xamarin is a cross-platform development framework; if we combine our development activities between .Net Core and Xamarin, we can get both a huge amount of code reuse and massive market penetration.

So, to round off the .Net Core discussion: we get a huge amount of code reuse through cross-platform support; we get more visibility of what the framework is doing, and increased confidence through community involvement, with it being Open-Source; we only use what we need, resulting in smaller binaries, fewer dependencies, and increased performance; and we have full command line support. Quite exciting…

I hope you have found this post informative or at the very least interesting. I’m looking forward to writing up more .Net related posts in the future.

.NET: Framework

I’m going to start putting together some posts related to the world of .Net (pronounced ‘dot net’). I’m a big fan of .Net and figured it could be useful.

For this post I wanted to target the .Net Framework. So let’s get stuck in…

The .NET Framework was first released in 2002 by Microsoft and is one of the most commonly used software development platforms, and hopefully something you’re interested in? If not, stick around, check out some of my other posts, and see what happens…

The .Net Framework implements the Common Language Infrastructure (CLI), an open specification developed by Microsoft with Hewlett-Packard, Intel, and others, and standardised by the International Organization for Standardization (ISO) and Ecma International (ECMA), completing in 2003.
The .Net Framework includes a large Class Library called the Framework Class Library (FCL) and a runtime environment called the Common Language Runtime (CLR).
Applications can be developed using multiple programming languages, including C# (pronounced ‘C Sharp’), Visual Basic .NET, and F# (pronounced ‘F Sharp’), with language interoperability between them. The framework gives us a large number of APIs: for example, database access using ADO.Net and the Entity Framework, web development (apps and services) with ASP.Net, service development with the Windows Communication Foundation (WCF), and user interface development with Windows Presentation Foundation (WPF). So we can do quite a variety of development through the .Net Framework; to avoid getting side-tracked I won’t go into any more detail on the above usage paths in this post, they really deserve their own posts.

Before we could develop .Net Framework applications, we primarily used the Component Object Model (COM), at least on Windows, to share code across languages and for cross-application communication. Sounds ok, right? Well, not really; COM was complicated and fragile. So Microsoft quite kindly developed the .Net Framework, and it quickly took the place of COM.

Ok, so we know .Net is great and all, but what is it exactly?
As touched on above, it comes in two primary parts: the CLR and the FCL.

The runtime environment (CLR) takes a fair bit of work away from us, such as finding, loading, and managing .Net objects. It means we don’t really need to worry about memory management (unless we’re using unmanaged code), app hosting, thread coordination, type safety, garbage collection, exception handling, security checks, and other low-level details. By use of Just-In-Time (JIT) compilation, the runtime converts our managed code (the Common Intermediate Language (CIL)) into native machine instructions understood by the computer.
As part of the CLR, we get something called the Common Type System (CTS) which is a standard that describes type definitions and programming constructs supported by the runtime. The goal here is to allow our applications to easily share information among themselves, even if those applications are written in different .Net languages.
One more step down we have the Common Language Specification (CLS); this specification defines a subset of types and constructs. If you write a library whose public types are all CLS-compliant, that library can be used across all .Net languages.
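As a rough sketch of what that looks like in practice (the class and property names are my own invention), C# lets us opt in to CLS-compliance checking with an attribute, and the compiler then warns us about non-compliant public members:

```csharp
using System;

// Marking our code CLS-compliant asks the compiler to warn us if the public
// surface uses constructs some .Net languages may not support.
// (In a real project you would typically apply [assembly: CLSCompliant(true)]
// at the assembly level.)
[CLSCompliant(true)]
public class Scores
{
    public int Total { get; set; }        // int is CLS-compliant everywhere

    // public uint RawTotal { get; set; } // uint is NOT CLS-compliant; with the
                                          // assembly-level attribute set, this
                                          // would raise compiler warning CS3003
}
```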
See the image below for a visual representation…

Common Language Runtime visual

Let’s look at the FCL. It is the .Net Framework’s implementation of the standard libraries defined by the Common Language Infrastructure (CLI). The core part of the FCL is the Base Class Library (BCL); as the name suggests, this core section contains only the base essentials, such as:

  • Threading
  • Security
  • File Input/Output (I/O)
  • Base types; such as Object
  • Collections
  • and more
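As a tiny taste of the BCL, here’s a sketch pulling a couple of those pieces together, collections and file I/O (the file name is just for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Collections from System.Collections.Generic...
var scores = new List<int> { 3, 1, 2 };
scores.Sort();

// ...and file I/O from System.IO, both part of the Base Class Library.
var path = Path.Combine(Path.GetTempPath(), "scores.txt");
File.WriteAllText(path, string.Join(",", scores));
Console.WriteLine(File.ReadAllText(path)); // 1,2,3
```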

I’ll put a post up that is just the namespaces and a brief description.

The FCL on a wider scale includes everything, and can be used to develop any type of software application, from web apps to console (Xbox) games.

So that’s a very quick introduction to the .Net Framework, hopefully you found it interesting.

If you have any questions feel free to ask.

Don’t Repeat Yourself

This week let’s take a look at a(nother) useful principle: Don’t Repeat Yourself (or DRY). The goal here is to get an idea of how the principle works and then aim to incorporate it into your day-to-day development activities. Remember to try to combine it with the other principles we’ve looked at, for example the Single-Responsibility Principle from the SOLID principles.

So where’d this one come from?

Introduced by Andrew Hunt and David Thomas in their 1999 book “The Pragmatic Programmer”, they define DRY as:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

What we are basically aiming for is: don’t say (following Uncle Bob’s ‘tell a story’ advice) or do the same thing twice. If you have some logic that performs a useful job, or even if it’s not that useful, don’t repeat that code. Combined with Single-Responsibility, you could wrap it up behind a class or method and have a single definition (an “authoritative representation”).

I read a couple of quotes once that sum this up quite well: “The only bug-free code is no code” and “Code that doesn’t exist is code you don’t have to debug” (I can’t remember the original authors). Essentially, both of these statements are telling us to write as little code as we can to get the job done. By following DRY we can shrink our code base by reducing repetition, and therefore reduce the chance of bugs, or of introducing bugs by changing the same piece of code found in multiple places.

Another benefit we get from DRY: if you have a block of code duplicated throughout your code base and you need to make a change to that block, not only do you have to go and update every copy, but there’s a chance a bug is introduced with each change (at the end of the day, we’re only human). If that code block is held in one place, only one update is required; we save time and reduce the chance of introducing bugs.

For example, say we needed to modify a piece of data using the following steps:

const int TEMP = 10;
double x = 30;
double y = 5;

x = x * TEMP;
x = x + y;
x = x / 10;

Now, instead of duplicating this “important” bit of code, we can wrap it in a suitably named class or method and simply call that.
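Wrapped up, that might look something like this (the class and method names are just illustrative):

```csharp
public static class DataModifier
{
    private const int Temp = 10;

    // The single, authoritative home for the calculation above.
    public static double Modify(double x, double y)
    {
        x = x * Temp;
        x = x + y;
        return x / 10;
    }
}
```

Now every caller uses `DataModifier.Modify(30, 5)`, and a change to the steps happens in exactly one place.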

That’s it really: if you find yourself writing the same piece of code more than once, consider whether it’s worth extracting all instances of that block into one place.

Magic Mirror

Apologies, this post has taken longer than I expected to get out. It’s a bit different from the others as it’s about a specific project.

I decided to give making the Magic Mirror a go. It was my first (and I expect not my last) DIY project and I must say I really enjoyed doing it. I’d been wanting to do it for a while, but as with many people, the first roadblock instantly made me move on to the next thing. Not very productive, right?

So this time round, when the urge to make it took me, I put myself on the spot: I told my fiancée I’d make it for her as a gift for her birthday, though I didn’t tell her exactly what it was, only that I was making something myself.

So what is the Magic Mirror? If you haven’t checked out the above link I’d recommend you do; they can probably explain it better than me. Essentially it’s a mirror that gives you information, be it the time, weather, news headlines, and a lot more. The image below shows my end product:

Magic Mirror

What I needed to make the Magic Mirror was:

  • A Monitor
  • A Raspberry Pi
  • A Frame
  • A Two-way / See through mirror
  • Wood
  • Tools

The Monitor

The monitor I used was an Iiyama ProLite E2207WS, mainly because I’ve had it for years and wasn’t doing anything with it.


The first thing I needed to do was get the frame off it, which wasn’t as bad as I thought it would be. I was quite worried I’d end up damaging the buttons, but it worked out well. You definitely need to be careful not to damage them, as you may not be able to use the monitor if you do.

Monitor – No Frame
Monitor – Buttons

If you don’t have a monitor lying around, make sure you get one where the HDMI connector won’t be facing the wall; it should face towards the floor when the monitor is standing upright. Another useful option would be a monitor with a USB port capable of powering a Raspberry Pi; that way you only need to worry about one cable. Unfortunately my monitor did not have a USB port, so I have two cables on display.

Raspberry Pi

I think it’s recommended you use the Raspberry Pi 3 to make the Magic Mirror, but again, I had a Raspberry Pi 2 Model B lying about the house that I wanted to do something with. So instead of buying a new Raspberry Pi I just used that (and now have an excuse to get the Pi 3). There wasn’t really much to this bit, to be fair; those behind the Magic Mirror software make it very easy to get things installed and moving. This is what you need to do:

Update OS

Generally it’s considered best practice to ensure your OS is up to date before you get started.

Install MagicMirror²

Open a bash terminal

Run the following:

bash -c "$(curl -sL https://raw.githubusercontent.com/MichMich/MagicMirror/master/installers/raspberry.sh)"

This will pull down and install everything we need to get the MagicMirror up and running. Once complete you should be presented with the MagicMirror UI.

The next thing you can do is modify the configuration slightly to better suit your needs. For example

  • Update the news feeds to a provider of your choice
  • Update the weather to your location or one you are interested in
  • Update the Calendar to your own
  • You can also add additional modules:
    • For information on this please check out the GitHub page here

Now finally for the Pi we need to make sure the Magic Mirror starts for us on boot. It is recommended to use PM2 for this.

Setting up PM2

First, we need to install it; so run the following from terminal:

sudo npm install -g pm2

Second, we need to ensure PM2 starts on boot; run:

pm2 startup

Once that is finished, run the command it shows you.

Next up, we need to add a start script for our Magic Mirror; it’s recommended to put this script in the Magic Mirror directory (and it’s just cleaner):

cd ~
vi mm.sh 

If you’re not experienced with ‘vi’, try ‘nano’ instead. Now we want to add our startup commands:

cd ~/MagicMirror
DISPLAY=:0 npm start

Save and close vi. Ensure the script is executable by running:

chmod a+x mm.sh

Now let’s start MagicMirror using PM2; run:

pm2 start mm.sh

MagicMirror should now start up. Once it does, press Super+Tab to navigate back to your terminal and run the following to ensure PM2 starts your MagicMirror on boot:

pm2 save

That’s it, the software side of things is complete. Next up, let’s check out the mirror itself.

The Mirror

The mirror was one of the things that stopped me going forward in the past. I’d spoken to a local mirror shop and they made it sound like getting the two-way sheet/glass was going to be really difficult. Annoyingly, if I’d just taken a look on the web I’d have found it. This time round I took to Amazon and tried to hunt something down; luckily, I found a seller called Sign Materials Direct who sold something along the lines of what I needed.

Unfortunately they didn’t sell it in the size I needed, as the sheet needs to match the monitor. Luckily, again, someone had had a similar issue to mine and had asked about customisation. It turns out they can cut the material to “any shape or size” you need. So I got in touch with them at the email address in the link (they got back to me very quickly) and within a few days I had my custom-cut sheet, which fit perfectly.

If you’re in the UK I would highly recommend you give them a try, I was really happy with the price and speed.

Also best to add I’m not affiliated with Sign Materials Direct, just a happy customer.

Custom Cut Acrylic Mirror

The Frame

At first I was thinking I might make the frame myself, as I was going to have to make a housing unit to put the monitor in and then attach it to the frame anyway. Then I realised I am neither a carpenter nor an artist and threw that idea away.

We have a large mirror in our living room and I thought a matching frame would look quite good. So off to the local mirror shop I went again (that’s where the larger mirror came from), and that was a no-go as their supplier doesn’t do custom sizes. Moving on, I gave the internet a go and found a company who make custom frames that I thought looked really nice. So I got the frame from there.


Now, I was a bit worried that maybe I’d measured something wrong, or typed something wrong, made a “schoolboy error”, or one of the pieces wouldn’t fit right, etc. But nope, it was all good; the frame and mirror fit perfectly.

Frame and Mirror

The Wood

The next step was to get some decent wood and cut it to house the monitor and Raspberry Pi. I didn’t have a saw, so this was a pretty good excuse to get an over-the-top tool for the job, right?

The wood I used was Redwood planed timber, apparently normally used for door frames, but it did the job quite nicely in this instance. Unfortunately I didn’t take any pictures of that part, which may have been the most interesting: wood cuttings everywhere and a large circular saw.

Anyway, I ended up with a bit more than I needed, mainly because I cut one piece slightly too small and had to go buy some more. Though I’m sure I’ll be needing it for a shelf or something… or just to cut it. So I got the wood cut to the right size, screwed the pieces together, and painted the whole thing white. Eventually, once it had dried, I attached the housing unit to the back of the frame with perhaps a few more screws than were needed, but better safe than sorry, right?

One thing I would add if you decide to do this yourself: I can’t imagine the unit would get too hot, but I drilled two large(ish) holes into the top of the housing unit to allow some kind of airflow, just in case. I’m not planning on leaving it powered on all the time, but I’d say it’s worth putting the holes there.

On the wall

The final job: getting it on the wall. This step was easier than I thought it would be, as the wall already had a large screw in it for the Magic Mirror to rest on; all I needed to add was something to stabilise it. This was done by adding some 90° hooks to the underside of the housing unit to give the overall structure more support. I may add some more hooks to the top of the housing unit to be sure, but it is certainly well secured now.


Well, thanks for sticking all the way through this post. I know it’s not my normal style, but I thought I’d try it, as this is the first project I gave a go and I’m quite pleased with the results. The fiancée really likes it, so that’s good.

With all the modules available for the Magic Mirror, and no doubt some more really cool ones to come in the future, I’m going to add to what I’ve got so far: things like facial recognition, Alexa integration, calendar notifications, and plenty more.

Hopefully you decide to give it a go yourself and if there’s anything I can help with please get in touch.

SOLID Principles

I thought this week we’d check out the SOLID principles I touched on in previous posts (here and here). I’m thinking we’ll get a quick overview, like we did with Design Patterns, and maybe we can then take another look at each principle in more detail in the future.

So what exactly are the SOLID principles? They are really good guidance on how to design and develop highly maintainable, flexible, and easily understood software. The principles were first gathered by Uncle Bob (if you read my previous post you’ll know to check out one of his books, Clean Code) in a paper he wrote in 2000 called ‘Design Principles and Design Patterns’, with the term SOLID coming later.

SOLID itself is made up of five principles, each a letter in the SOLID acronym, so let’s get started…

Single Responsibility Principle

The first principle, and the ‘S’ in SOLID, is the Single Responsibility Principle. At its core it means:

There should never be more than one reason for a class to change

In essence, a class should only ever do one thing; it should have one responsibility. This allows us to know exactly what our class is doing and why.

One of the easiest ways to apply this principle: if the description of your class or method has the word ‘and’ in it, you may well be doing more than one thing.

By following this principle we can reduce coupling between classes and improve the maintainability of our code.
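A minimal sketch of the idea (the class names are my own, just for illustration): one class formats a report and another saves it, so each has exactly one reason to change.

```csharp
// Before: one class that formats AND saves a report has two reasons to change.
// After: each class below has exactly one responsibility.
public class ReportFormatter
{
    public string Format(string title, string body) => $"{title}\n----\n{body}";
}

public class ReportSaver
{
    // Changes to storage concerns live here, away from formatting concerns.
    public void Save(string path, string formatted) =>
        System.IO.File.WriteAllText(path, formatted);
}
```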

Open-Closed Principle

Next up is ‘O’, that is the Open-Closed principle. The Open-Closed principle states:

Software Entities (Classes, Modules, Functions, etc) should be open for extension, but closed for modification

So what does that mean? It means our goal is to ensure we can always extend the behaviour of an entity (say, when requirements change or for future developments), but at the same time we do not modify the existing behaviour of that entity.

Personally, I view this more from the perspective of the interface: we should always strive to maintain compatibility and, ideally, never introduce breaking changes by modifying existing entities; instead, we add behaviour by extending them.
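Here’s one common way of sketching this (the shape names are my own): the calculator below never needs modifying when we extend the system with a new shape.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Open for extension: add a new shape by adding a class.
// Closed for modification: AreaCalculator never has to change.
public abstract class Shape
{
    public abstract double Area();
}

public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area() => Width * Height;
}

public class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area() => Math.PI * Radius * Radius;
}

public static class AreaCalculator
{
    public static double TotalArea(IEnumerable<Shape> shapes) => shapes.Sum(s => s.Area());
}
```

Adding a `Triangle` later means writing one new class; no existing entity is touched.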

Liskov Substitution Principle

This one I initially found more confusing than the previous two, but the more you use and understand these principles, the better it gets. Developed by Barbara Liskov and introduced in her 1987 keynote ‘Data Abstraction and Hierarchy’, within the realm of SOLID it is defined as:

Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it

This principle states that an object of a base class must be replaceable with an object of a derived class without breaking the program (polymorphism in action).

As above, I like to apply this by thinking in interfaces: if I expect an interface within my code base, then any class that implements that interface, and therefore fulfils the contract, can be used.
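To sketch that substitution (the payment classes here are my own invention): code written against the base class works unchanged with any derived class.

```csharp
// Any PaymentMethod can stand in wherever the base class is expected.
public abstract class PaymentMethod
{
    public abstract string Pay(decimal amount);
}

public class CardPayment : PaymentMethod
{
    public override string Pay(decimal amount) => $"Paid {amount} by card";
}

public class CashPayment : PaymentMethod
{
    public override string Pay(decimal amount) => $"Paid {amount} in cash";
}

public static class Checkout
{
    // Written against the base class; works for any derived class without knowing it.
    public static string Complete(PaymentMethod method, decimal amount) => method.Pay(amount);
}
```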

Interface Segregation Principle

The ‘I’ in SOLID is for the Interface Segregation Principle, developed by Uncle Bob when he was consulting for Xerox. The principle is defined as:

Clients should not be forced to depend upon interfaces that they do not use.

Similar to the Single-Responsibility Principle mentioned above, the goal here is to reduce the amount of “stuff” our code does, and so reduce the complexity within the code base.

I like to think of this one as: we should aim to define many small, related interfaces instead of one massive ‘Jack of all Trades’ interface. The aim is to provide clients only with what they need instead of forcing them to use something “bloated”. The nice outcome of having multiple interfaces is that a client can implement only those they require and is not forced to provide implementations for functionality they won’t or don’t need to use.

So our goal is to have a number of interfaces that can be used to make up the whole but can also be split and implemented on a smaller scale.
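A small sketch of that splitting (the device names are my own): two focused interfaces rather than one do-everything interface.

```csharp
// Small, focused interfaces instead of one 'Jack of all Trades' interface.
public interface IPrinter { string Print(string document); }
public interface IScanner { string Scan(); }

// A simple printer implements only what it needs...
public class BasicPrinter : IPrinter
{
    public string Print(string document) => $"Printing: {document}";
}

// ...while a multifunction device can opt in to both.
public class MultiFunctionDevice : IPrinter, IScanner
{
    public string Print(string document) => $"Printing: {document}";
    public string Scan() => "Scanned page";
}
```

With a single `IPrinterScanner` interface, `BasicPrinter` would be forced to stub out `Scan` with something it can’t actually do.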

Dependency Inversion Principle

So, the last principle: ‘D’, the Dependency Inversion Principle. Again developed by Uncle Bob, in a report he produced in 1996 called ‘The Dependency Inversion Principle’ (certainly worth a read). Interestingly, this principle actually comes in two parts, defined as:

High-level modules should not depend upon low-level modules. Both should depend upon abstractions.

Abstractions should not depend upon details. Details should depend upon abstractions.

What this principle is trying to enforce is that changes at the low level do not cause cascading effects at the high level. Our high-level modules contain the business logic of our applications; if these modules depend on our low-level modules, changes to the low level may have direct consequences for the high level, and vice versa. Our goal should be to decouple the high-level modules from the low-level ones (and, again, vice versa).

Why? Mainly reusability and less fragility: by decoupling our code we increase the portability of our classes or modules (don’t reinvent the wheel) and we make our code less prone to breaking when changes are made.
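A minimal sketch of that decoupling (the type names are my own invention): the high-level module depends only on an abstraction, and the low-level detail plugs in behind it.

```csharp
// The abstraction both levels depend on.
public interface IMessageSender
{
    string Send(string message);
}

// Low-level detail; one of many possible implementations.
public class EmailSender : IMessageSender
{
    public string Send(string message) => $"Email: {message}";
}

// High-level business logic. Swapping EmailSender for any other
// IMessageSender implementation requires no change here.
public class OrderNotifier
{
    private readonly IMessageSender _sender;
    public OrderNotifier(IMessageSender sender) => _sender = sender;
    public string NotifyShipped(string orderId) => _sender.Send($"Order {orderId} shipped");
}
```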


Well, there we go; this has been a short introduction to the SOLID principles, and hopefully it has given you some information on what they are and how they may be used day to day. I may produce a post for each principle and provide examples in C#, if the interest is there.

One thing I have found is that the principles play well with each other, and so as we apply them more and more each day in our development practices, they quite easily become habit.

As usual feel free to get in touch either by the comments or the contact us page.


My posts up to now have been general introductions to Software Development, and I want to continue that trend (and do so for a while longer). This time I thought we’d take a look at something I think is very important: names.

As software developers we name a lot of things: methods, functions, data members, properties, variables, arguments, packages, and of course classes. One thing I have aimed to do since reading ‘Clean Code’ by Robert Martin (also known as Uncle Bob) is to give EVERYTHING I name a decent, intention-revealing name. That is, I try to name things in such a way that anyone (including myself) who reads the code can tell what my intention was when I originally wrote it.

One of the main goals of intention-revealing names is that they should result in an almost comment-free code base. Granted, you will still have some comments in there (and as I like to use the StyleCop style guide for Visual Studio, I don’t fully adhere to this rule myself). Let’s check out an example to see what I mean.

Let’s say you’re reviewing code trying to find a bug (in code you wrote six months ago) and you come across this:

int T; // time

It is not unreasonable to ask: what is this time, time since when, why is this time important, and why is it named T? If it had just been given a more intention-revealing name, there wouldn’t be a need for the reader to go hunting for what it is and what it’s for. Instead, it could be:

int TimeSincePlayerLastMadeMove;

The above code gives far more information than ‘T’ (although, if I’m honest, I’d say we’d want to go one step further, apply the Single-Responsibility Principle, and wrap this up in a class, but I’ll go over that in more detail in another post).

We also want to avoid adding unnecessary information that just adds to the confusion. For example, if we started off with:

List<GamePiece> PieceList; // Players Game Pieces

We’ve included the collection type in the name, but what if we change it from a list to something else, say an ICollection? While the name implies it’s a collection of some sort, any future user may well try to use it as if it were a list, again leading to confusion. Instead we could use:

List<GamePiece> GamePieces;

Another point Uncle Bob raises is that we should make meaningful distinctions within our code, not just code to make the compiler happy, and that we should avoid what he calls ‘noise’ words, like a, an, and the, which may add confusion where there just isn’t any need for it.

The book explains that we should strive to use pronounceable names: things that, when we’re in a meeting, people know what it is we’re talking about. Unfortunately I don’t have a personal example of this (though I would say acronyms are one of my biggest pet peeves in meetings); however, he gives one he came across, ‘genymdhms’, which stood for ‘generation date: year, month, day, hour, minute, and second’. Not ideal, really.

I won’t run through the entire section from the book; there are just a few more points I’d like to mention.

Class and method names: class names should be nouns, such as Customer or Account; they are names of things, so we should use naming words. Method names should be verbs, such as DeleteAccount, GetUserName, or SaveGame; they are things that do something, so we should use doing words. We’re very lucky today that if we name something and decide it’s not the right name, we can quite easily refactor our code and have the name change ripple throughout the code base.

A key point Uncle Bob raises, which I think is spot on, is that the reason people use shorter, less meaningful names is to shorten the code for that line, or to meet the required coding standard. Two of the coding standards I’ve come across in the past are MISRA and the Google C++ Style Guide, and both have a line-length limit. I can’t recall the MISRA length (I think it was between 80 and 95 characters), but the Google limit is 80 characters, with additional rules when 120 and 150 are exceeded (though in all honesty I think 150 is a bit much). The main reason for both of these limits is a throwback to when screen sizes were limited and code might be printed. On the printed-code issue: I have not had to print code out, nor do I know anyone who has (at least in the past few years). As for screen sizes, our monitors keep progressing, so this just seems like a poor reason, and I can’t see how it will continue as a rule. Though, to be fair, Google themselves state it is a throwback to the 1960s (link here).

So there we go: one of our many aims when writing code should be to make the things we name no more confusing than they need to be. Granted, I am cherry-picking things here, and I highly recommend anyone and everyone interested in writing software pick up Uncle Bob’s book and give it a read.

Design Patterns

I’m going to keep this one small and to the point today as we take a look at Design Patterns.

Essentially, design patterns are reusable solutions to common software development problems. This is great because it means someone has taken the time to put together problem-solving templates that we can apply during our development.

Now there are a lot of design patterns out there and I mean a lot but they all, generally, fall into 3 categories:

  • Creational
    • Concerned with the creation of objects
  • Structural
    • Concerned with the relations between classes and objects
  • Behavioral
    • Concerned with the interactions between objects

The above 3 categories pretty much sum up all the design patterns we need to worry about.

So how did we get to the point where we have well-known and well-defined design patterns we can use? It started with the release of a book called Design Patterns: Elements of Reusable Object-Oriented Software in 1995 by a group known as the Gang of Four (GoF). The GoF consisted of:

  • Erich Gamma
  • John Vlissides
  • Richard Helm
  • Ralph Johnson

Even today the book is considered one of the most reliable resources on software design patterns. From it we got 23 design patterns, split into the above 3 categories.

  • Creational: 5 Patterns
    • Abstract Factory
    • Builder
    • Factory Method
    • Prototype
    • Singleton
  • Structural: 7 Patterns
    • Adapter
    • Bridge
    • Composite
    • Decorator
    • Facade
    • Flyweight
    • Proxy
  • Behavioral: 11 Patterns
    • Chain of responsibility
    • Command
    • Interpreter
    • Iterator
    • Mediator
    • Memento
    • Observer
    • State
    • Strategy
    • Template Method
    • Visitor

Well, that’s quite a list, isn’t it? And it’s only grown since then.
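To give a taste before those dedicated posts, here’s a minimal sketch of one Creational pattern from the list, the Singleton, in C#. The Logger class is my own illustration; the point is a class that only ever allows a single shared instance.

```csharp
using System;

// Minimal Singleton sketch: one shared instance, created on first use.
public sealed class Logger
{
    // Lazy<T> gives us thread-safe, on-demand construction.
    private static readonly Lazy<Logger> instance =
        new Lazy<Logger>(() => new Logger());

    public static Logger Instance => instance.Value;

    // Private constructor: no one outside can 'new' a Logger.
    private Logger() { }

    public void Log(string message) => Console.WriteLine($"LOG: {message}");
}
```

Every call to `Logger.Instance` returns the same object, which is the whole point of the pattern.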

So how do these patterns actually help us with developing software?

I’ll pick that up next with posts dedicated to each category, first up: Creational.

I welcome feedback, especially at this very early stage, so any improvements you think I can make please get in touch either in a comment below or send me a message and I’ll get back to you.

Four Pillars of Object Oriented Programming


This week we’re going to look at the Four Pillars of Object Oriented Programming, as a nice follow-on from the last post, Procedural and Object Oriented Programming. The plan here is to expand on what was mentioned and get more information on what the Four Pillars actually are, in a simple and to-the-point way. So first off, let’s name them:

  1. Abstraction
  2. Encapsulation
  3. Inheritance
  4. Polymorphism

As a side note, there is also the term Three Pillars of Object Oriented Programming; that version does not include Abstraction. So I went for a post on the ‘Four Pillars’ as it gives the best of both worlds.


The first Pillar we are going to look at is ‘Abstraction’, also called ‘Data Abstraction’. This is the means by which we only show (externally to our class) what is absolutely necessary, and we hide all other aspects of the class. This is where the access modifiers I mentioned in Procedural and Object Oriented Programming come in, that is public and private (there is also protected and, for C#, internal, but we’ll just concentrate on public and private). So, in a class, the parts (data members and methods) we want other entities to see we make public, and the parts we want to keep hidden we make private.

A good example of this is to consider a car. When we want a car to turn left we use the steering wheel. How exactly the wheels move in the correct direction we don’t really need to know; moving the steering wheel (our public method) ensures all the actions ‘under the hood’ (our private methods) make the wheels turn in our desired direction.

So Abstraction, also called Data Abstraction, is the process of hiding data and implementation details, exposing only what is necessary.
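The car example might look something like this in C# (a sketch with made-up method names): the caller only ever sees TurnLeft, while the work ‘under the hood’ stays private.

```csharp
public class Car
{
    // Visible to the outside, but only Car itself can change it.
    public int WheelAngle { get; private set; }

    // The 'steering wheel': the only thing the outside world needs.
    public void TurnLeft()
    {
        RotateSteeringColumn(-30);
        AdjustWheelAngle(-30);
    }

    // Hidden machinery 'under the hood'.
    private void RotateSteeringColumn(int degrees)
    {
        // Linkage details the caller never needs to see.
    }

    private void AdjustWheelAngle(int degrees)
    {
        WheelAngle = degrees;
    }
}
```

Trying to call `RotateSteeringColumn` from outside the class simply won’t compile, which is the hiding in action.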


Encapsulation is the process of containing, or ‘encapsulating’, our state and behaviour, that is our data members and methods, within a self-contained entity, for example a class (check out my Procedural and Object Oriented Programming post for a bit more info). By applying our Access Modifiers (see Abstraction above) we can limit who has access to that state and behaviour. Encapsulation is a very useful mechanism as it allows us to place all the related ‘stuff’ of a class (data members and methods) within one nicely contained and well-named package, i.e. a class. (I’ve made a point of ‘well named’ because after reading a book by Robert Martin called ‘Clean Code’ it is a must; I recommend you read it, but I’ll be giving you some details in the future.)

And that’s it, encapsulation really is that simple: wrapping up state and behaviour within a single class.
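As a small sketch (BankAccount is my own example, not from the book): the state is private, and the only way to change it is through the behaviour that lives alongside it.

```csharp
public class BankAccount
{
    // State: hidden from the outside world.
    private decimal balance;

    // Behaviour: the only way to change that state.
    public void Deposit(decimal amount)
    {
        if (amount > 0)
            balance += amount;
    }

    public bool TryWithdraw(decimal amount)
    {
        if (amount <= 0 || amount > balance)
            return false;
        balance -= amount;
        return true;
    }

    // A read-only view of the state.
    public decimal Balance => balance;
}
```

Because `balance` is private, no other code can set it to a nonsense value; every change goes through Deposit or TryWithdraw.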


Inheritance is similar to how we use the term in the real world when applied to parentage. It is the process by which we allow our classes to get data members and methods, state and behaviour, from other classes. Why would we want this? Why not write each class individually? Well, the biggest answer is code reuse: the more code we write, the more bugs we bring into a system and the more code we have to maintain. If we can write a class and then allow other classes to take that functionality and use it without having to rewrite it, that is very helpful. It’s similar to how a child gets some characteristics from their parents. We call the class providing the members and methods the parent or base class, and the class inheriting them the child or sub class.

Different languages have different versions of inheritance. C++ allows a child class to have multiple parent classes; there are pros and cons to this, and one of the most well-known cons is the diamond problem (to avoid getting side-tracked I’ll do a post on this in the future). Another form is single inheritance (from classes), where we can only ever have one parent class; of course this can go up the chain, so our parent class can have one parent class and so on. This is the case in C#. In C#, however, we concentrate more on the use of interfaces, where we define a contract and leave it up to the inheriting classes to provide the implementation (again, this will definitely be a future post, it’s very important).

So the easiest way to know if you need inheritance is to ask whether you can apply the ‘is-a’ relationship, i.e. a car is a vehicle. In this circumstance we may have a Vehicle class and a Car class, where we may, and probably should, have Car inherit the characteristics of Vehicle.
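That ‘is-a’ example, sketched in C# (the members are invented for illustration): Car inherits Vehicle’s state and behaviour rather than rewriting it.

```csharp
// The parent (base) class.
public class Vehicle
{
    public int Wheels { get; protected set; }

    public string Describe()
    {
        return $"A vehicle with {Wheels} wheels";
    }
}

// The child (sub) class: a Car 'is-a' Vehicle.
public class Car : Vehicle
{
    public Car()
    {
        Wheels = 4; // reuse the inherited state
    }
}
```

Car never defines Describe, yet every Car can be described, and a Car can be used anywhere a Vehicle is expected.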


The final pillar we’re going to look at is Polymorphism: ‘poly’ means ‘many’ and ‘morph’ means ‘forms’, so ‘Polymorphism’ means ‘many forms’. It allows us to define, say, a method in our base class and allow our child classes to provide their own implementation of that method. A good example is if we have an Animal base class with a method called ‘MakeSound’; the result of ‘MakeSound’ would be different for pretty much every animal we created. For example, a Dog would bark, a Cat would meow, a Lion would roar, and a (male) Grasshopper makes its noise by rubbing a hind leg on its wings.

It essentially allows for a contract to be agreed, so in terms of our animal example, every animal that inherits from the Animal class is essentially saying ‘I will provide you with my own MakeSound implementation’.
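The Animal example, sketched in C#: the base class declares the contract, and each child supplies its own MakeSound.

```csharp
// The contract: every Animal must provide MakeSound.
public abstract class Animal
{
    public abstract string MakeSound();
}

public class Dog : Animal
{
    public override string MakeSound() => "Woof";
}

public class Cat : Animal
{
    public override string MakeSound() => "Meow";
}
```

Calling MakeSound through an Animal reference picks the right implementation at runtime, which is the ‘many forms’ in action.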

This one tends to be a bit trickier than the other pillars, but at its core it is quite simple; if you’d like some more information, please feel free to ask.


And there we go, a quick view of the Four Pillars of Object Oriented Programming. Hopefully that has given you some insight; if you have any questions, please feel free to ask.