Category Archives: Introduction

AOSP: Architecture

Previously I wrote an article on getting your own copy of the AOSP source, AOSP: Download. Before we go ahead and build a ROM, I thought it would be worth a quick run-through of the AOSP architecture so we can understand where we should be making our changes if/when we want to.

Below is the high level architecture diagram of the AOSP provided by the AOSP website.

AOSP Architecture

As you can see we have multiple layers within the system; they are:

  • Application Framework: Used primarily by developers to create Android apps.
  • Binder IPC: Allows for Inter-Process Communication (IPC); that is, it lets apps cross process boundaries, similar to COM on Windows.
  • System Services: A collection of co-operating services best thought of as modular, focused components. There are two groups of services, as shown above: System and Media services.
  • Hardware Abstraction Layer (HAL): As the name suggests, this is an abstraction layer between Android and the underlying hardware. The HAL defines an interface that must be implemented by hardware vendors.
  • Linux Kernel: Android is built on top of the Linux kernel and, as with most distributions, Android patches the kernel to meet its needs.

This was a very quick introduction to the different layers of the AOSP. I'll go into more detail on each layer in future posts as we go along, but I think this gives you a good idea of what to expect. Next we'll build a ROM and run it in an emulator.

Android is a trademark of Google LLC.

AOSP: Download

I wrote a Bitcoin node setup post recently (hopefully you had a go?) and thought I'd show you how to set up an Android Open Source Project (AOSP) development environment and download the AOSP source.

This tutorial is for an Ubuntu 18.04 machine only. Other systems should be OK, such as elementary OS or newer versions of Ubuntu, but I've not tried any others. Feel free to get in touch if there is an OS you'd like me to demonstrate or if you're having any issues.

First things first, open a terminal using ctrl+alt+t and type the following:

sudo apt-get update

Followed by:

sudo apt-get upgrade

Once they are done we will have an up to date machine and should be good to get stuck into setting up for AOSP.

Our plan is to get the official AOSP source onto our machines. If you want a preview of what we're going to download, you can browse the repos at android.googlesource.com.

If you popped by the link you'll see there are a lot of repos available to you. Don't manually download each one; we'll get them all using the repo tool.

We need to install some packages to get our machine ready, so run:

sudo apt-get install git-core gnupg flex bison gperf build-essential zip curl zlib1g-dev gcc-multilib g++-multilib libc6-dev-i386 lib32ncurses5-dev x11proto-core-dev libx11-dev lib32z-dev libgl1-mesa-dev libxml2-utils xsltproc unzip

Let’s install repo then:

sudo apt-get install curl
mkdir ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo

Now that we have the repo tool, we need to make it executable:

chmod a+x ~/bin/repo
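One thing worth noting: ~/bin isn't always on your PATH, so the shell may not find the repo launcher. A quick way to fix that for the current session (append the export line to ~/.bashrc to make it permanent):

```shell
# Put ~/bin at the front of PATH for this session so `repo` resolves.
export PATH="$HOME/bin:$PATH"

# The first PATH entry should now be your ~/bin directory.
echo "$PATH" | cut -d: -f1
```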

In case you run into any problems: I had to update my repo script so it called python3 instead of python. To do this, open ~/bin/repo in a text editor and edit the first line from

#!/usr/bin/env python

to

#!/usr/bin/env python3
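If you'd rather script that change, a sed one-liner does the job. The sketch below works on a temporary stand-in file so you can see the effect safely; swap in ~/bin/repo for the real edit:

```shell
# Create a stand-in file with the old shebang.
tmp=$(mktemp)
printf '#!/usr/bin/env python\nprint("hello")\n' > "$tmp"

# Rewrite only the first line, in place: python -> python3.
sed -i '1s/python$/python3/' "$tmp"

head -1 "$tmp"   # → #!/usr/bin/env python3
```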

This tool will allow us to pull from multiple git repos to create our Android™ distribution.

We tell it the repos we want to pull using a manifest.xml file, which describes the projects to pull and their locations.
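To give you a feel for what that looks like, here is a minimal, illustrative manifest sketch; the remote, revision, and project entries below are examples only, not the real AOSP manifest:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <!-- Where the git repos live -->
  <remote name="aosp" fetch="https://android.googlesource.com/" />
  <!-- Defaults applied to every project below -->
  <default remote="aosp" revision="android-9.0.0_r50" />
  <!-- Each project maps a repo to a local path in your checkout -->
  <project path="frameworks/base" name="platform/frameworks/base" />
</manifest>
```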

Now we want to get git; run the following:

sudo apt-get install git

Moving on to the reason we're here: getting ourselves a copy of the AOSP.

First, let's create our directory.

mkdir -p ~/android/android-9.0.0_r50

Next up, navigate to the new directory

cd ~/android/android-9.0.0_r50

Then initialise the client for the version of the source you want (see below for more details):

repo init -u https://android.googlesource.com/platform/manifest -b android-9.0.0_r50

Then finally sync it; this will take a while.

repo sync

Worth considering is the -j flag with your sync; it allows you to fetch repos in parallel. By default repo will do 4 parallel downloads.
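For example, to match the job count to your CPU cores (nproc reports the core count; repo itself isn't run here, the command is just printed):

```shell
# Build the sync command with one job per CPU core.
JOBS=$(nproc)
echo "repo sync -j${JOBS}"
```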

To get a different version of the AOSP, visit the build numbers page on source.android.com and pick the one you're after. Simply replace 'android-9.0.0_r50' above with the tag of the build you choose.

Great, so now we have our Android™ repos downloaded. 

We’ll leave it at that for this post but I’ll be putting another out soon where we will build and create our own Android™ Distribution to run on an emulator.


Android is a trademark of Google LLC.

Bitcoin: Getting a Core Node

I’ve recently become interested in Blockchain technologies and thought I’d get involved with Bitcoin and go from there. 

The experience has been really enjoyable and I’ve learned quite a bit. So figured I’d pass that on to others who were interested and wanted to start by showing you how to set up your own Bitcoin Core Node. 

Bitcoin Core implements all parts of Bitcoin. The first implementation was made by Satoshi Nakamoto before the completion of the whitepaper and was known as "Bitcoin" or the "Satoshi client". Since its release it has been heavily modified and improved, is now known as Bitcoin Core, and is the reference implementation for the Bitcoin system. Essentially this means we should refer to Bitcoin Core for how parts of the tech should be implemented.

For the purposes of this post, I just want to show you how to set up your own Bitcoin Core node by pulling down the source code, building it, and running your own Node.

First, we want to set up our development environment. I'll be running on an Ubuntu 18.04 virtual machine. Ensure your box is up-to-date and has git installed (throughout this post I'm assuming you know the basics of version control).

Next, we want to get hold of the source code. There are a few ways to do this but I find the easiest is simply to clone the repo from GitHub. You’ll want to get a copy of the latest version. 

Open a terminal, navigate to a preferred directory and run the following:

git clone https://github.com/bitcoin/bitcoin.git

Now that we have the latest version of the Bitcoin source code, we'll want to select the latest release and provision our machine to build it.

We can use the git tag command to get a list of available releases. 

In your terminal, run the following:

git tag

Choose the latest release shown and run the following:

git checkout <release>

You can confirm you have the correct version checked out by running:

git status

The result will contain the following:

HEAD detached at <release>
nothing to commit, working directory clean

Great stuff. We’ve got the correct release of the Bitcoin source code we want to build, now we just need to make sure our machine is provisioned to do so.
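If you'd like to see that detached-HEAD behaviour without touching your Bitcoin checkout, you can reproduce it in a throwaway repo (the tag name here is made up):

```shell
# Create a scratch repo with one commit and one tag.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag v0.0.1

# Checking out a tag detaches HEAD, just like `git checkout <release>`.
git checkout -q v0.0.1
git status | head -1   # → HEAD detached at v0.0.1
```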

From the root of the repo, navigate to the doc directory; in here you will find instructions on how to provision your machine to build your node. As I'm on an Ubuntu machine I will use build-unix.md, but there are also instructions for macOS and Windows.

Open your instructions:

less doc/build-unix.md

Ensure you read the instructions carefully to avoid any potential issues. Following the Ubuntu steps:

First, install the build requirements:

sudo apt-get install build-essential libtool autotools-dev automake pkg-config bsdmainutils python3

Then, install the dependencies:

sudo apt-get install libssl-dev libevent-dev libboost-system-dev libboost-filesystem-dev libboost-chrono-dev libboost-test-dev libboost-thread-dev libminiupnpc-dev

Now our machines are provisioned we can start getting the build process moving.

First, we need to generate a collection of configure scripts. These scripts will inspect your machine, discover the correct settings, and check that you have the correct dependencies for building. We can do this by running the autogen script found in the root of the repo.

Navigate to the repo root and run:

./autogen.sh
As part of the autogen step, we get a file called configure generated at the root. This file is used to automatically identify the required libraries and create a build script for your machine.

Let’s run that script then:

./configure
If you get the following error:

configure: error: libdb_cxx headers missing, Bitcoin Core requires this library for wallet functionality (--disable-wallet to disable wallet functionality)

This usually means the Berkeley DB 4.8 headers aren't installed; either install them, configure with --with-incompatible-bdb to build against a newer system BerkeleyDB, or pass --disable-wallet if you don't need the wallet.

Now, hopefully, that went smoothly!

Let’s compile. This will take some time; run:

make
If your machine can manage it, it would be worth adding parallel compilation using -j; e.g. make -j2 will use two cores.

It is advised that you run the suite of unit tests available once compilation has completed. You can do this by running:

make check

Finally, we need to install the executables on our system by running the following command:

sudo make install

That’s it for building Bitcoin Core, we can now run it.

To run our node we use the following command:

bitcoind
Though for now at least I recommend running:

bitcoind -printtoconsole

The addition of -printtoconsole will feed output to the console so you start to get an idea of what is happening.
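Rather than passing flags on every launch, options can also live in a configuration file. A minimal ~/.bitcoin/bitcoin.conf sketch might look like this (both options are real bitcoind settings; the combination is just an example):

```ini
# Send log output to the console rather than debug.log
printtoconsole=1
# Optionally limit disk usage by pruning old block data (550 MiB is the minimum)
prune=550
```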

Congratulations, you now have your own Bitcoin Core node up and running; it will take quite a while to sync up to the current state of the network.

To make sure you are aware of the full implications of running a Bitcoin node, I recommend you check out the "Running a full node" guide on bitcoin.org.

This post is for educational purposes only and I do not make any recommendations on whether you should or should not run a Bitcoin node. The risks and costs associated must be weighed by yourself.

Understanding the MVVM Pattern

I mentioned in a previous post that I had become a Xamarin Certified Mobile Professional (check it out for an introduction too). I really enjoyed getting there and learned a lot about Xamarin, .Net, and the platforms involved. I'm going to start sharing more of what I've learned, starting now.

For this post as the title suggests we are going to look at the MVVM pattern…

MVVM is a software architectural pattern and a variation of MVC (the Model-View-Controller pattern). The acronym stands for Model-View-ViewModel, which identifies the key components within an MVVM app, but we'll get to that.

The pattern is concerned with helping us decouple our business and presentation logic from the user interface.

Why would we want to do this?

By providing that separation between the components we can improve our development by allowing different teams (such as the design team and the development team) to work to an agreed contract and develop their own components independent of each other.

As the name suggests, the Model-View-ViewModel pattern has 3 core components: Model, View, and ViewModel. As they say, a picture (a diagram in this case) paints a thousand words, so hopefully the diagram below gives you an idea of the relationships.

At a high level the interactions are as follows:
  • View: Knows about the ViewModel
  • ViewModel: Knows about the Model but does not know about the View
  • Model: Does not know about the View or ViewModel

As you can see the ViewModel provides a layer of separation between our View and Model components and through the use of Data Binding we can decouple our ViewModels from our Views. By developing our apps in this way we get quite a few benefits:

  • Adaptive: We can take old Models and adapt them for use in our apps through our ViewModels. e.g. models developed for a web app that aren’t suitable for mobile can be adapted via the ViewModel to provide our Views what they need.
  • Testable: Our unit tests can be developed and maintained easier as our C# code is separate from our UI.
  • Decoupled: We reduce the dependencies between the components and so can update them independent of the others, providing of course we continue to adhere to our contracts.

Now that we have an overall idea of MVVM, let's take a look at each component.


The first component we'll look at is the View. It is responsible for what the user sees: the structure, layout, and appearance of each page. Within Xamarin we have two files, a .xaml file and a .xaml.cs file, which together define our views (or pages). As the names suggest, the .xaml file holds our XAML implementation of the UI and the .xaml.cs file holds our C# code-behind.

So we can develop our Views using either XAML or C#. With the MVVM pattern we aim to do everything in XAML and keep the C# code-behind to a bare minimum; however, in some cases we may still need C#, such as for animations.

One question you may have now is: if we aim to put as little code as possible in our View code-behind, how do we let users do anything, such as fill in a form and submit it?

For this we have Commands. If a control (a visual item) supports the ICommand interface, we can bind it to a property on its ViewModel (a property whose type implements that interface), so when the control's command is triggered the associated property on the ViewModel is invoked. Additionally there are features called Behaviours; these can be attached to an object in the View and wait for either a command to be triggered or an event to be raised. Once the Behaviour has been triggered it can then call either the ICommand property or a method on the ViewModel.


Our ViewModels provide properties and Commands for our Views to attach to through Data Binding, and through this relationship the ViewModel can flow UI presentation logic to the View without needing to know about the View. This is done through the INotifyPropertyChanged interface, whose PropertyChanged event (typically raised by an OnPropertyChanged helper method) notifies the View that a change has occurred, making apps very responsive if implemented correctly.

Think of this relationship as the View is used to define how the functionality will be displayed and the ViewModel, through the use of properties and commands, defines what will be displayed in the UI.

Looking at the diagram above again, you can see that our ViewModel sits between our View and our Model; this is no accident. Through the ViewModel we can co-ordinate our Views based on our Models and update our Models based on our Views, providing that layer of separation between the two components. In some cases the ViewModel will expose the Model directly to the View, but we also have the control to perform data modification in the ViewModel to meet the needs of our expected presentation design. A very simple example could be converting data from one format to another, such as from lbs to kgs.

As I mentioned above, we perform the communication between our Views and ViewModels through Data Binding. Data Binding allows us to hook our UI elements to properties in our ViewModel and then provides either one- or two-way communication between the two components.

For the ViewModel-to-View communication to occur, the relevant property must raise the PropertyChanged event. In order to do this a ViewModel must implement the INotifyPropertyChanged interface and raise the event when a relevant property is changed. By doing this, whenever the property changes the relevant UI element is automatically passed the new value to display, providing up-to-date information to the user.

If you have a list or other collection it is a bit easier to perform that communication: use the data structure ObservableCollection&lt;T&gt; and much of the work is done for you, as it implements the INotifyCollectionChanged interface. All you need to do is hook it up.


Lastly, we have the final component, the Model. This is quite straightforward: the Model is any class not directly associated with the visual aspect, and it can be viewed as representing the domain model.

This generally involves our data model and our business and validation logic. Examples of Model level entities are Plain Old CLR Objects (POCOs) but also include Data Transfer Objects (DTOs), entity, and proxy objects.

We primarily use this component with services and repositories for data access and caching.

So there we have it, an intro to MVVM and its core components. Our View handles the formatting of what the user sees, our ViewModel defines what will be seen, and the Model helps to organise and process the data.

The pattern is very popular within the mobile development domain and I highly recommend you give it a go if you haven’t already to see if you could benefit from it.

Keep an eye out for the next post.

C#: Hello World

Let’s write some actual code today and there are few places better to start than with the traditional ‘Hello World’ program.

For this post, I want to give an introduction to writing our first functional C# program. It will not do much but, hopefully, for those new to software development it will be interesting.

We'll develop the app using PowerShell (or the command line, though some of the commands will be different) and a simple text editor (Notepad), but if you want you can follow along in Visual Studio; I've put a setup guide together for Visual Studio 2017 if you need/want it.

The first thing we’ll do is check we have dotnet installed. Open PowerShell and run the command

dotnet --version

You should see something along the lines of the following:

dotnet version

If you don’t see this or you’re told dotnet isn’t installed, check out this guide to install it.

Great, so dotnet is installed and we can now start writing our code.

We’ll want to go ahead and create our project directory and change into it.

mkdir hello_world
cd hello_world
Make the project directory and change into it

Now that we have our project directory we can create the project itself. We are going to be making a console app and all we want it to do is print ‘Hello World!’ to the console. The easiest way to create the app is from the command line, as we are there already. There are a few options available to us, and not just for console apps, but I’ll put a post together about that another time. For now, we just want a console app created with the following command.

dotnet new console
dotnet new console

We should now have two files inside our hello_world directory, a Program.cs file and a hello_world.csproj file.

new project contents

The Program.cs file will contain our code, it currently has the default codebase provided by dotnet for a console app, shown below in the red box.

Some of you may notice that it appears this app does what we want it to already. We’ll make some changes so we can introduce some other components in software development.

The hello_world.csproj file is our project file; this is an MSBuild project file for a C# project. It contains information about the project and how to build it. For the purposes of this post it's not something we need to worry about, beyond needing the default.

Let's modify our Program.cs file to provide us with a bit more to work with. To open the Program.cs file in a text editor (on Windows), run the following command in the hello_world project directory.

Notepad Program.cs
Open Program.cs in Notepad

This will open the Program.cs file in Notepad as shown below.

Program.cs open in Notepad

Great stuff, now let's make some changes. Update your Program.cs file in line with the code below.

using System;    // 1. Importing namespace

namespace hello_world    // 2. Hello World namespace
{
    public class Program    // 3. Class declaration
    {
        static void Main(string[] args) // 4. Method declaration
        {
            string helloWorldMessage = "Hello World!"; // 5. Statement 1
            Console.WriteLine(helloWorldMessage); // 6. Statement 2
        } // End of Method
    } // End of Class
} // End of namespace

Now that we have some code to work with let’s go through it.

Firstly, I'd like to explain what '//' means: everything on the line after the '//' is called a comment and is ignored by the compiler. That means it doesn't matter what we put there; the compiler isn't interested in it. The text (in grey) is for humans only.

Now let’s start going over the code line by line.

using System;

This line allows us to reduce our codebase; by including it we don't have to specify the fully qualified name to access the contents of the System namespace. See the below example:

// Without 'using System;' we'd have to use the fully qualified name to use WriteLine
System.Console.WriteLine("Hello World!");

// But because we did include 'using System;' at the beginning we can omit it and still access WriteLine
Console.WriteLine("Hello World!");

Following that, we have our namespace declaration.

namespace hello_world
{
    ... // Rest of the code.
}

The first line declares the name of the namespace, which is hello_world; think of a namespace as a container that lets us organise our code. Everything between the opening { and closing } is contained within the scope of the namespace and so is part of it.

NOTE: Any code caught between ‘{‘ and ‘}’ is said to be in the scope of whatever created that scope; namespaces, classes, structs, methods, loops etc.

Next up we have our class definition:

public class Program
{ // No need to talk about scopes again

To define a class we follow the class definition syntax:

// [access modifier] - [class] - [identifier]
public class Program

Our access modifier is public. There are several access modifiers in C#, but for now public means this class can be accessed by any other code, either in the same assembly or any other assembly that references it. We can skip adding the access modifier and the compiler will then use the default, internal, which allows only code in the same assembly to use it.

NOTE: If you omit the access modifier anywhere in C#, the compiler will plug in the most restricted access that can be declared for that member.

As we are creating a class we use the class keyword to define it as such.

Finally, we provide the identifier for the class, that is we provide the name and for us that’s Program.

Next, we define the first and only method within our Program class, called Main.

As with a class definition we have a set method definition syntax we can follow.

// [access modifier] - [optional modifier] - [return type] - [Identifier] (Parameter List)
static void Main(string[] args)

'Main' is important as it is the entry point for our program; this is where the app starts from as far as we are concerned. Interestingly, because the 'Main' method is marked as the entry point in IL (see this post for more information on how .Net works), it doesn't have to be marked with an access modifier, i.e. made public, because behind the scenes the CLR knows where to start from.

First up then we have the static keyword which is an optional modifier. Because Main is our entry point we don’t need to worry about static for now but I’ll link to the relevant post explaining static in detail in the future. For now, if you’ve read my OO post static associates the method with the class and not with an object created from the class.

Next is our return type. By default we don't return anything from Main, so we use the keyword void to identify it as such; however, this can be any data type. Traditionally the type has been returned by value, but as of C# 7.0 it can also be returned by reference. Again, not something to worry about now; I'll do a post explaining value and reference types.

Following our return type we have the identifier; again, this is the name. Quite straightforward this one: the name is 'Main'.

Finally, we have our parameter list, this is simply a list of 0 or more types and names required by the method and the corresponding arguments provided by the client. In this case, it would be any arguments passed in when starting the app.

NOTE: To confirm and avoid any confusion in the future, a method defines what parameters are required and a caller of the method provides arguments for each parameter.

Now that we have our method defined we can move onto our first (and only) statement block, this contains the statements (actions) which are performed sequentially within the scope of the method.

We have two statements, the first:

string helloWorldMessage = "Hello World!";

This is a declaration statement, we are declaring a new local variable of type string and naming it helloWorldMessage. We use an expression to assign the value “Hello World!” to our new variable using the assignment operator ‘=‘. To access the value of the string we then use helloWorldMessage, as shown in the next line.

A variable is a storage location which can contain different values over time but not different types, this is because C# is a strongly-typed language. We must know what each storage location will contain at compile time. So each variable is of a specific type and the type defines the characteristics of the storage location. In our case, we have a string type which means we can only put text (a sequence of read-only characters) into our storage location.

Our next statement is:

Console.WriteLine(helloWorldMessage);
Here we are using a method provided by .Net Core, found in the System namespace. The method itself is static, has no return type, and lives in the Console class. There are multiple versions of the WriteLine method, each having a different collection of parameters. For us the parameter is a string, as we are passing in the string contained in helloWorldMessage, so we are using the following fully qualified Console.WriteLine:

System.Console.WriteLine(string value);

This method, as described by the Microsoft documentation:

Writes the specified string value, followed by the current line terminator, to the standard output stream.

The remaining content in the file is the scope-closing '}'s described above.

Ok, so we’ve written our program and we’ve worked through it and have an understanding of what each part does. Now let’s run it.

In your terminal, make sure you’re inside the project directory, run the following:

dotnet run

This will build our code using the dotnet build command and then run it for us. You should see something like the output shown below.

dotnet run

Congratulations! It doesn't do much and has no real value except to us, but a 'Hello World' app is the starting point for almost all developers.

I’ll add a post providing C# keywords.

For now, why not play around with it and have it print different things to the screen? For those a bit more curious: you might've noticed my text is a different colour to yours; have a go and find out how you can change the text colour from inside your app.

Update: Code can be found here.

Test Driven Development

For this post, I wanted to take a look at Test-Driven Development (TDD). What is TDD? TDD is a test-first technique in software development: we write our tests first, then we write our actual production code. The TDD approach relies heavily on the repetition of a very short development cycle while following the Three Laws of TDD.

Always remember the goal of TDD is to make our code cleaner and simpler.

Before we get into the good bits, I think it is worth quickly going over the way software has generally been written and identifying some of the flaws. It is common to see production code written first with the unit tests as more of an afterthought, though this is less common today than, say, 20 years ago. However, we still have the same problems: when our codebase does not contain reliable, extensive tests we are hesitant to make any changes, especially in those instances where we don't have any real confidence in the code itself, perhaps because we inherited it or because it was rushed. By following a TDD approach we can make changes, for whatever reason, with more confidence.

Enough with looking back, let’s look forward…

We’ll start with the Three Laws of TDD, they are:

  • First Law: You may not write production code until you have written a failing unit test.
  • Second Law: You may not write more of a unit test than is sufficient to fail, and not compiling is failing.
  • Third Law: You may not write more production code than is sufficient to pass the currently failing test.

If you follow these rules you get locked into the development cycle mentioned above. Uncle Bob has mentioned in his books and videos that this cycle can be from 30 seconds to a minute and there have been others that have said it could be longer. I find it varies depending on the complexity of a given situation but it is certainly within the definition of short. Essentially it looks like this:

TDD Development Cycle

The above shows a visual representation of the development cycle we get into by following the TDD approach, we:

  1. Write our test
  2. Run our tests
  3. Make code changes
    • Add required production code
    • Refactor code
  4. Run our tests
  5. Repeat

We can see that any failing test means we 'Make Code Changes' and any passing test means we 'Write a test'. That's it; that's the cycle we get ourselves into, and that's the cycle that gives us the confidence I mentioned before.
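The cycle can even be sketched in shell; the add function and its test below are invented purely to show red turning green:

```shell
# First Law: the test exists before the production code.
test_add() { [ "$(add 2 3)" = "5" ]; }

# Run the test: add() doesn't exist yet, so we're red.
test_add 2>/dev/null && echo "green" || echo "red"

# Third Law: write just enough production code to pass.
add() { echo $(( $1 + $2 )); }

# Run the test again: now we're green, and the cycle repeats.
test_add && echo "green" || echo "red"
```

Running this prints red then green, which is the heartbeat of TDD in miniature.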

By working this way we end up with a massive suite of tests that (providing the tests themselves are of good quality) mean we can refactor, extend, or update our production code with confidence; if/when we break something, we find out immediately and can fix it.

I'm not going to go into any detail on the tests themselves, as I find that is a separate topic, or at least one I want to expand on in the future. Suffice to say that while we can follow TDD, it won't be as effective if our tests are of poor quality.

Finally, from my experience, following a TDD approach in my development activities has given me far more confidence in my code for two reasons. The first is the likelihood my code will work as expected once released. The second is that I can go in and make changes as needed without the concern that I'm touching something fragile and that, once a change is made, the whole system might come crashing down around me. If you don't use TDD, there are plenty of really good resources out there for you to check out; two I'd recommend are Pluralsight, which has a lot of useful courses, and Uncle Bob's Clean Code book, where Chapter 9 is all about unit tests.

Command-Query Separation

I’ve been dropping some short but, hopefully, meaningful posts about subjects within software development like Names and Don’t Repeat Yourself. These appear to be getting some interest so I’m going to look at putting together more short to the point titbits, or tidbits, of recommended software development best practices.

So this week’s titbit is the aim of incorporating the Command-Query Separation principle into our code.

The main goal of Command-Query Separation is that we should aim to write our code in such a way that methods either do something or give us something, but NOT both.

  • Commands: Do something; change the state of the system.
  • Queries: Give us something; give us back a result but do not change the state of the system.
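To make the two roles concrete, here's a small sketch in shell (the function and file names are invented for illustration): get_count is a query, increment is a command:

```shell
# A bit of state for the example.
counter_file=$(mktemp)
echo 0 > "$counter_file"

# Query: returns the current count, changes nothing.
get_count() {
  cat "$counter_file"
}

# Command: changes the state, returns nothing useful.
increment() {
  echo $(( $(cat "$counter_file") + 1 )) > "$counter_file"
}

increment
increment
get_count   # → 2
```

Because get_count never writes, you can call it as often as you like without affecting the system; all state changes go through increment.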

This concept was coined by Bertrand Meyer, the author of Object-Oriented Software Construction, in the late 1980s and, as mentioned above, is quite straightforward to understand but not as easy to implement all the time.

Martin Fowler wrote an article in 2005 that provides a good example of where this is the case: if you are using a third-party data structure which does not follow this rule and you perform a query on it, the state of the system may change as a result, thereby breaking the principle. He continues with this train of thought: we should aim to follow these best practices where and when we can, but that is not always possible.

My view here is we can either review our implementation, and look to improve it if it’s a break we are causing, or we can accept that these things happen but try to work around the issue so we ensure our code is as clean and expressive as we can make it.

A quote from one of my university lecturers was "Write your code like the person taking it on after you is an axe-wielding murderer who knows where you live." A bit dramatic, but the point is there: we should aim to make our code clean and clear, and this kind of decoupling is certainly something we should all be considering.

.Net: C#

Ok, so I’ve talked about the .Net Framework and .Net Core now I think it’s time we have a bit of an introduction to a .Net language, specifically C# (pronounced ‘C Sharp’).

C# is a member of the C family of languages, this includes C, C++, Java, Objective-C, and many others. It is commonly referred to as a general-purpose, type-safe, object-oriented programming language. Let’s quickly break this down:

  • General-Purpose: Means the language has been designed for application in many different domains.
  • Type-Safe: Means behaviours within our programs are well defined; the compiler and runtime ensure values are only ever used in ways their types allow.
  • Object-oriented: Means we create our programs around the concept of real-world objects. See this post.
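A quick sketch of what type safety buys us in practice: the generic List&lt;int&gt; below can only ever hold ints, so misuse is caught by the compiler rather than blowing up at runtime.

```csharp
using System;
using System.Collections.Generic;

// Type safety in practice: this list can only ever contain ints.
var numbers = new List<int> { 1, 2, 3 };
numbers.Add(4);
// numbers.Add("five"); // compile-time error: cannot convert string to int

int sum = 0;
foreach (int n in numbers) sum += n; // no casts needed, n is known to be int
Console.WriteLine(sum); // 10
```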

The core syntax for C# is very similar to that of Java, but it is not a clone, at least not directly. C# drew on several earlier languages, including Visual Basic (another .Net language) and C++ (developed by Bjarne Stroustrup).

C#, however, gives us a lot of what other languages do, including overloading, overriding, structs, enums, and delegates (callbacks), but also gives us features more commonly found in functional languages, like F# and LISP, such as anonymous types and lambda expressions. All of which I will be writing about in the future.
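As a small taste of those features, here’s a delegate created from a lambda expression, plus a lambda used with LINQ (a standalone snippet of my own, not tied to any later post):

```csharp
using System;
using System.Linq;

// A delegate is a type-safe reference to a method; a lambda is a
// concise way of creating one inline.
Func<int, int> square = x => x * x;
Console.WriteLine(square(4)); // 16

// Lambdas are most often seen as arguments to LINQ operators:
var evens = Enumerable.Range(1, 10).Where(n => n % 2 == 0).ToArray();
Console.WriteLine(string.Join(",", evens)); // 2,4,6,8,10
```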

The intention of the language was to be a simple yet modern language which provides portability and deployment to different types of environments. With the release of .Net Core, I’d say it is certainly living up to expectations.

The latest stable release of C# is currently at 7.3 as of May 2018 and the latest Preview release is at 8.0. Keep an eye out for a post that gives an overview of the changes provided to us at each release.

I won’t go into too much detail for this post as I want to start writing smaller posts targeting specific parts of the C# language.

.Net: Core

So we’ve looked at the .Net Framework, which was limited to Windows; now let’s check out .Net Core…

.Net Core was announced by Microsoft in June 2016 and showed a new path they were creating for the .Net ecosystem; while this new direction was based on C# and the .Net Framework, it was quite different.

Two of the biggest changes for the .Net ecosystem were, first, that it would be cross-platform, released across Windows, macOS, and Linux at the same time, in contrast to the .Net Framework only ever having been available on Windows. The second change was that .Net Core was going to be Open-Source, that is COMPLETELY Open-Source, meaning the code is available for us to review so we can get a better understanding of what is actually happening. The community is also encouraged to get involved by way of pull requests and the like. This means we actually get our eyes on the framework and documentation, raise issues such as bug reports, fix bugs, recommend or implement improvements, and add additional functionality. Not bad right?

The origin of .Net Core came from developers requesting that ASP.NET run on multiple platforms and not just Windows. From there we got ASP.NET 5 and Entity Framework 7, which, to be honest, was more than a little confusing: considering we had .Net Framework 4.6 and Entity Framework 6, you’d expect them to be the next releases, right? Not so. Instead we got .Net Core, ASP.NET Core, and Entity Framework Core to show the distinction between the two ecosystems.

Since the announcement in 2016, we’ve seen the release of .Net Core 1.0, .Net Core 1.1 released in March 2017, .Net Core 2.0 released in August 2017, and .Net Core 2.1 announced in May 2018, and we’re currently at .Net Core 3 Preview 3 as of March 2019. So we’re making some decent progress and getting more and more functionality with each release.
But what is driving these releases? Microsoft is developing their .Net offering (Core and Framework) in line with a formal specification called the .Net Standard, currently at version 2.0. The goal is to establish uniformity across the whole .Net ecosystem. The image below is directly from Microsoft and, as they say, “a picture paints a thousand words”, which I think makes their goal a bit more obvious.

I haven’t touched on Xamarin because I want to save that for its own post. Check out this post for a very quick intro though.

From the image and the statement above their goal is to have a single library which can be used for the .Net Framework, .Net Core, and Xamarin.

By use of the NuGet Package Manager we can download .Net Standard or third-party packages which contain just the constructs or functionality we need, greatly reducing our production footprint. For example, if you need a JSON parser you could swing by the NuGet Package Manager and add an implementation of your choice, such as Json.NET, to your project. This is a very easy and very quick way for us to develop, and we don’t have to hunt down implementations out on the web that may or may not do what we need.

In addition we have also been given Command Line Support, that is we can create, develop, build, test, and publish our applications all from the command line, again all cross-platform. Certainly worth looking into further so keep an eye out for that post if it interests you.
We also get the Universal Windows Platform (UWP), which you can see in the image above, which follows Microsoft’s write-once-run-anywhere approach. By using UWP we can run the same code on multiple Windows platforms including IoT devices (like the Raspberry Pi), mobile devices running Windows 10 Mobile, PCs, HoloLens, Surface devices, and even the Xbox One.

If you checked out the Xamarin post I mentioned above, you’ll know Xamarin is a cross-platform development framework, so if we combine our development activities between .Net Core and Xamarin we can get both a huge amount of code reuse and massive market penetration.

So, to round off the .Net Core discussion: we get a huge amount of code reuse through cross-platform support; we get more visibility of what the framework is doing, and increased confidence, through community involvement with it being Open-Source; we only use what we need, resulting in smaller binaries and dependencies and increased performance; and we have full command line support. Quite exciting…

I hope you have found this post informative or at the very least interesting. I’m looking forward to writing up more .Net related posts in the future.

.NET: Framework

I’m going to start putting together some posts related to the world of .Net (pronounced ‘dot net’). I’m a big fan of .Net and figured it could be useful.

For this post I wanted to target the .Net Framework. So let’s get stuck in…

The .NET Framework was formally released in 2002 by Microsoft and is one of the most widely used software development platforms, hopefully something you’re interested in? If not, stick around, check out some of my other posts and see what happens…

The .Net Framework implements the Common Language Infrastructure (CLI), an open specification developed by Microsoft with Hewlett-Packard, Intel, and others, and standardised by the International Organization for Standardization (ISO) and the European Computer Manufacturers Association (ECMA), completing in 2003.
The .Net Framework includes a large Class Library called the Framework Class Library (FCL) and a runtime environment called the Common Language Runtime (CLR).
Applications can be developed using multiple programming languages, including C# (pronounced ‘C Sharp’), Visual Basic.NET, and F# (pronounced ‘F Sharp’), with language interoperability. We get a large number of APIs from the framework, for example database access using ADO.Net and the Entity Framework, web development including apps and services with ASP.Net, service development with the Windows Communication Foundation (WCF), and user interface development with the Windows Presentation Foundation (WPF). So we can do quite a variety of development through the .Net Framework, and to avoid getting side-tracked I won’t go into any more detail on the above usage paths in this post; they really deserve their own posts.

Before we could develop .Net Framework applications we primarily used the Component Object Model (COM), at least on Windows, to share code across languages and for cross application communication. Sounds ok right? Well not really, COM was complicated and fragile. So Microsoft quite kindly developed the .Net Framework and it quickly took the place of COM.

Ok so we know .Net is great and all but what is it exactly?
As touched on above it comes in two primary parts the CLR and the FCL.

The runtime environment (CLR) takes a fair bit of work away from us, such as finding, loading, and managing .Net objects. It means we don’t really need to worry about memory management (unless we’re using unmanaged code), app hosting, thread coordination, type safety, garbage collection, exception handling, security checks, and other low-level details. By use of Just-In-Time (JIT) compilation, the runtime converts our managed code (the Common Intermediate Language (CIL)) into native machine instructions understood by the computer.
As part of the CLR, we get something called the Common Type System (CTS) which is a standard that describes type definitions and programming constructs supported by the runtime. The goal here is to allow our applications to easily share information among themselves, even if those applications are written in different .Net languages.
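You can see the CTS at work from within C# itself: the language’s built-in type keywords are simply aliases for the shared CTS types, which is part of what lets values flow between .Net languages unchanged.

```csharp
using System;

// C#'s 'int' keyword is an alias for the CTS type System.Int32,
// and every type ultimately derives from System.Object.
int x = 123;
Console.WriteLine(typeof(int) == typeof(Int32)); // True
Console.WriteLine(x.GetType().FullName);         // System.Int32
Console.WriteLine(x is object);                  // True
```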
One more step down we have the Common Language Specification (CLS), and this specification defines a subset of types and constructs. If you write a library whose public types are all CLS-compliant, that library can be used across all .Net languages.
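A sketch of what that looks like in code: marking a type with [CLSCompliant(true)] asks the compiler to warn if any public member uses a construct outside the CLS subset (the Measurements class here is just an invented example).

```csharp
using System;

// Public surface sticks to CLS-compliant types (int, string, etc.),
// so any .Net language can consume this class.
[CLSCompliant(true)]
public class Measurements
{
    public int Count { get; set; } // fine: int is in the CLS subset

    // A public 'uint' member here would draw compiler warning CS3003,
    // since unsigned integers are not part of the CLS subset.
    private uint internalTicks;    // private members are not checked
}
```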
See the image below for a visual representation…

Common Language Runtime visual

Let’s look at the FCL: it is the .Net Framework’s implementation of the standard libraries defined by the Common Language Infrastructure (CLI). The core part of the FCL is the Base Class Library (BCL); as the name suggests, this core section contains only the base essentials such as:

  • Threading
  • Security
  • File Input/Output (I/O)
  • Base types, such as Object
  • Collections
  • and more
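Those essentials cover a surprising amount of everyday work on their own. Here’s a small sketch using nothing but the BCL — a collection written out through file I/O and read back (the temp-file name is arbitrary):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// A few BCL essentials together: collections, file I/O, and base
// types, all from the framework with no extra packages.
var path = Path.Combine(Path.GetTempPath(), "bcl-demo.txt");
var lines = new List<string> { "alpha", "beta", "gamma" };

File.WriteAllLines(path, lines);        // file I/O: write the collection out
var readBack = File.ReadAllLines(path); // returns a string[]

Console.WriteLine(readBack.Length); // 3
Console.WriteLine(readBack[0]);     // alpha
File.Delete(path);                  // tidy up the temp file
```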

I’ll put up a post that is just the namespaces with a brief description of each.

The FCL on a wider scale includes everything and can be used to develop any type of software application from web to console (Xbox).

So that’s a very quick introduction to the .Net Framework, hopefully you found it interesting.

If you have any questions feel free to ask.