ASP.NET Core 1.0’s DNX and MSTest


DNX is .NET’s new cross-platform execution environment that was developed alongside ASP.NET Core 1.0 (formerly known as ASP.NET 5) [1]. I came across multiple sources (here for instance [2]) that claimed you can’t write tests for DNX-based code with MSTest. And although I prefer xUnit, I felt bad for the MSTest community, so I decided to spend a little time on this and found that MSTest works perfectly fine! You can check out the code on my GitHub [3].

Simple DNX project

I’ve made this example without Visual Studio, to better show what DNX is. So open your favourite file explorer and, at a location of your choice, create the folders as follows:

Figure 1: Folder structure

In the root folder “DnxAndMsTest”, create a “global.json” with the following JSON in it via your favourite text editor:

Figure 2: Global.json

Side note: It’s not required, but I’ve used a “src” folder and a “test” folder in this example to keep the project clean. Via the “projects” property we tell DNX which folders need to be included. And we can set the minimum required DNX version, which is set to “1.0.0-rc1-update1” for this example.
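In case the screenshot doesn’t come through, a “global.json” matching this description would look like:

```json
{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-rc1-update1"
  }
}
```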

Then in the “SomeDnxProject” folder add a “project.json”, a “Program.cs” file and a “MessageMaker.cs” file.

Open them in your favourite text editor and add the following JSON to the “project.json” file:

Figure 3: project.json
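The screenshot isn’t reproduced here, but a minimal “project.json” for a runnable DNX console project would look roughly like this (the exact framework monikers depend on your setup):

```json
{
  "version": "1.0.0-*",
  "compilationOptions": {
    "emitEntryPoint": true
  },
  "frameworks": {
    "dnx451": { }
  }
}
```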

Add the following code to the “Program.cs” file:

using System;

namespace SomeDnxProject
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var messageMaker = new MessageMaker();
            Console.WriteLine(messageMaker.GetMessage());
        }
    }
}

And the following code to the “MessageMaker.cs” file:

using System;

namespace SomeDnxProject
{
    public class MessageMaker
    {
        public string GetMessage()
        {
            return "Hello from DNX!";
        }
    }
}
Side note: The “project.json” is required, because DNX uses this file for all its configuration. The “public static void Main” method in the “Program.cs” file is required because DNX uses this as its entry point to run the application. “MessageMaker.cs” simply contains a method which we can test with MSTest later on.

Then in the “SomeDnxProject.MsTest” folder add the files “project.json”, “Program.cs” and “MessageMakerTests.cs”.

The “project.json” contains the json:

Figure 4: project.json for test project
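A sketch of what the test project’s “project.json” could look like; the package versions and the exact value of the “test” command are assumptions on my part, so check the MSTest.Runner.Dnx package documentation:

```json
{
  "version": "1.0.0-*",
  "dependencies": {
    "SomeDnxProject": "1.0.0-*",
    "MSTest.Runner.Dnx": "1.0.0-*"
  },
  "commands": {
    "test": "MSTest.Runner.Dnx"
  },
  "frameworks": {
    "dnx451": { }
  }
}
```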

In the “Program.cs” file you add:

using System;

namespace SomeDnxProject.MsTest
{
    public class Program
    {
        public static void Main(string[] args)
        {
            //Dummy Main method so DNX stops complaining.
        }
    }
}

Then add to the “MessageMakerTests.cs” the following code:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using SomeDnxProject;

namespace SomeDnxProject.MsTest
{
    [TestClass]
    public class MessageMakerTests
    {
        public MessageMakerTests()
        {
        }

        [TestMethod]
        public void GetMessage_GivenNone_ExpectedDefault()
        {
            var expectedMessage = "Hello from DNX!";
            var messageMaker = new MessageMaker();
            var message = messageMaker.GetMessage();
            Assert.AreEqual(expectedMessage, message);
        }
    }
}

Go back to the root of the project “DnxAndMsTest” in a console and type “dnu restore” to create the “project.lock.json” files which DNX needs to run. Then navigate to the “SomeDnxProject” folder and type “dnx run” to test everything. You should see the following:

Figure 5: DNX run

If not, check whether your installation of DNX is correct and whether you followed the above steps correctly. But if you see the above result, you can now navigate to the “SomeDnxProject.MsTest” folder, type “dnx test” and see the following result:

Figure 6: DNX passed


The key component is the assembly “MSTest.Runner.Dnx”, which can be acquired via NuGet [4]. It has only been available since December 2015, so it is understandable that it has been missed by others, especially because with the release of ASP.NET 5 it seemed xUnit was endorsed. But this assembly makes sure your MSTest tests get discovered, also in Visual Studio’s Test Explorer by the way. I’ve tried it in an ASP.NET Core 1.0 project.


I hope I can make some MSTest users happy. You can work on the cutting edge and still use your beloved MSTest if you want. Make sure you check out my GitHub [3] for the necessary files and if any MSTest user is really happy now, let me know!


  1. DNX overview:
  2. Unit testing ASP.NET Core 1.0:
  3. Code on GitHub:
  4. MSTest.Runner.Dnx:

Tracking code comments with Task List


I discourage the use of code comments (no, keep reading please), because they are unmaintainable, don’t force you to write readable code, and are often written so poorly that they don’t reveal intent anyway. I’ve written about this a while back [1].

Having said that, sometimes you want to add a code comment for various reasons. So if you’ve depleted your other options (refactoring so the code reads on its own, XML documentation, etc.) and you must write code comments, wouldn’t you prefer trackable code comments?

Task List

In Visual Studio you can track your code comments in the Task List window by using certain tokens in your comment (for instance: “//TODO Water the plants”). If you don’t see your Task List you can open it by clicking View and then Task List in your menu (Ctrl+W, T):

Figure 1: View –> Task List (your menu may be different)

Once you’ve clicked this menu item you’ll see the Task List window at the bottom of Visual Studio by default:

Figure 2: Task List window

Side note: Ignore the icon in front of the “UNDONE” token for now.

In figure 2 you can see some of the default tokens I’ve used, namely: “UnresolvedMergeConflict”, “TODO”, “HACK” and “UNDONE”.

Side note: You can change the sort order, reorder columns and show/hide columns. You can also navigate to the next and previous task in the list. [2]


You can view, edit or even add these tokens via the Tools and then the Options menu item, and then the Task List item under the Environment item in the Options dialog. By default Visual Studio has the following tokens:

Figure 3: Options dialog

Side note: “UNDONE” is not entirely default in figure 3, which I’ll get back to.

As you can see tokens have unique strings (like “TODO”), but also priorities. This changes the icon in front of a token in the Task List and their order in the list. You’ve got ‘normal’ priority, which does nothing. You’ve got ‘high’ priority, which will display a red exclamation icon in front of the token and will make these types of comments appear first in the Task List. And you have ‘low’ priority, which will display a black downwards arrow in front of the token and make comments of this type appear at the bottom in the Task List.

Side note: The “TODO” token appears to be the only one you can’t modify. This kind of spoiled my plan to change all the tokens to include a “:”, because that looks much better in your Task List. Oh well…

Custom Tokens

You can also add your own tokens. Just type in a new name at the “Name” textbox and the “Add” button will be enabled. Adjust the priority if you want. Click the “Add” button and then click “Ok” to close the Options dialog. I’ve added “CRIT” (critical) and made the priority ‘high’.

I’ll first show the code listing so far which uses all the tokens (including the custom one):

Figure 4: Complete code listing
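The screenshot isn’t reproduced here, but a sketch of what such a listing could look like (class and member names are my own invention) is:

```csharp
using System;

namespace TaskListDemo
{
    public class PlantWaterer
    {
        // CRIT: Watering fails silently when the pump is offline.
        // UnresolvedMergeConflict: Verify which watering schedule survived the merge.
        public void WaterPlants()
        {
            // TODO Water the plants on a schedule instead of on every call.
            // HACK Hard-coded amount until the sensor API is available.
            int millilitres = 250;
            Console.WriteLine("Watering with {0} ml.", millilitres);
            // UNDONE Drainage check not implemented yet.
        }
    }
}
```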

Which makes the Task List look like:

Figure 5: Task List with custom token (and a shortcut)

You can see all the tokens at play, with at the top the ‘high’ priority tokens “CRIT” and “UnresolvedMergeConflict”, and at the bottom the “UNDONE” token, which I’ve set to ‘low’ priority.

But what’s that highlighted line in figure 5? Have you noticed how that’s just a line of code with no token? That is because this is the last capability of the Task List I wanted to show in this blogpost, called “Shortcuts”. I actually didn’t show you everything in figure 4 (so mean of me). If I may direct your attention to the following screenshot:

Figure 6: Shortcut in code

Check out the icon at line 20. That is a shortcut which makes the line of code appear on your Task List. All you have to do is double click on it in your Task List and you’ll jump to this line of code from wherever you are in your solution. You can make shortcuts by placing the cursor on the line of code where you want it and then click on the Edit, Bookmarks, Add Task List Shortcut menu items.

User Tasks

As of Visual Studio 2015, “User Tasks” no longer exist. And in my humble opinion this is a good thing. Let dedicated tools like TFS or Jira keep track of your tasks. They do a much better job at it.


  1. XML documentation:
  2. Using the Task List:

Visual Studio Tip of the Week: Block Selection

You can select a whole block of code easily in Visual Studio by holding down left Shift + left Alt and then selecting the text with the arrow keys. Imagine you have a variable called “object1” and you would like to change that to “object2” for some reason: just select all the 1’s in the aforementioned way and type a 2.



Unit Testing made easy with Dependency Injection

Synopsis: An example in which we walk through a case where Dependency Injection made writing automated tests, and unit tests in particular, a delight. This also makes the discipline of Test Driven Development much more of an option. Find the example used for this post on my GitHub: [9]


I have introduced Dependency Injection by giving an example showing some late binding [1]. After that I’ve shown a possible route to take if certain dependencies themselves depend on run-time values [2]. And there’s an article about how to apply Aspect Oriented Programming (AOP) by using Dependency Injection’s Interception [3]. Now I’d like to talk about how Inversion of Control (IoC) via Dependency Injection (DI) helps you write better unit tests and eventually apply Test Driven Development (TDD). But before we can do that I feel we should first talk about automated tests themselves, because these are crucial for doing healthy TDD. In particular I noticed some confusion about integration tests versus unit tests. So what is the definition of a unit test? What is a good unit test? And why does this difference matter so much?

Side note: If you don’t care about TDD, it is perfectly fine to focus on the fact that writing automated tests (unit tests in particular) is very easy if you apply DI.

Integration Test versus Unit Test

To get a good grip on this I think we should compare integration tests with unit tests.

Integration Test

Most automated tests are actually integration tests. It doesn’t matter if you used MS Test’s Unit Test item template; whether a test is an integration test is about what is factually tested. If the ‘unit of work’ under test spans a couple of components working together, often including external components like a database, a file, a web service, etc., to test if they all play their part, then it is an integration test. An integration test is allowed to be slow, due to its nature. That is why they are often run during a nightly build or something similar. Figure 1 illustrates connected code components which would have to be tested via an integration test, with long-running processes like calling a web service, persisting data in the database and writing data to a file on the hard drive:


Figure 1: Integration testable

Unit Test

Tests that define a clear ‘unit of work’, without any external component involved, and with just enough components to test what must be tested, are unit tests. To emphasize the remark about external components: a unit test can’t extend to persistence layers like databases, files on disk, or web services somewhere on the internet. And if too many components are involved, the test does too much and can’t be read easily. Unit tests are mainly for teams of developers to keep proving to themselves that the code still works as intended. Components not directly involved in the current test need to be faked (stubbed or mocked). Don’t worry if some terms are not familiar. I am working on an in-depth article about unit testing and TDD, but if you read “The Art of Unit Testing” by Roy Osherove [4] you will know everything you need to know and more! He has for instance an extensive (and much better) definition of a unit test on his website [5]. Figure 2 illustrates with red crosses where the connections in figure 1 would need to be severed to make it unit testable:


Figure 2: Unit testable

So why does this matter?

Unit tests need to be fast, concise and easy to read. In an Agile Scrum environment, for instance, where you need to release a potentially shippable increment at the end of every sprint, you need to maximize your feedback loops. You want to know at codebase level at every moment if everything is still in order. That is why a Scrum team should strive for automated unit tests that fire off with every source control check-in. And that is why they need to be fast: if they are slow you won’t want to fire them off with every check-in. And because they need to be fast and need to tell you at a glance where something is wrong, they need to be small and concise. Because they are small and concise you will have a lot of tiny unit tests. And the Scrum team will need to maintain them. If unit tests are not readable they will be ignored and poorly maintained, effectively killing off the effect of unit testing. I’ve summarized this in figure 3:


Figure 3: Maximize feedbackloop

Now that I’ve illustrated the difference it is time to move on to actual code!

Unit test without DI

I started adding (what I thought were) unit tests to the solution. It has proven to be a good practice to make a separate project for your tests and keep them out of your main project [6]. I started out by using the Unit Test Project template, which mainly adds the correct assemblies to start using MS Test; plus the tests show up in Visual Studio’s Test Explorer out of the box. For this example however I’ve used xUnit instead of MS Test (if you are interested in my reasons why, you can check out a post on Mark Seemann’s blog [7]). The tests and classes from figure 1 would look like:




Listing 1: First attempt at SomeService and its unit test
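The listing isn’t reproduced here, but based on the text it could look roughly like this (the dependency class names and timings are my own invention, matching the fictitious numbers discussed below):

```csharp
public class SomeService
{
    public bool DoStuff()
    {
        // Each dependency is newed up inside the method and hits a real external resource.
        var repository = new SomeRepository();
        repository.WriteStuffToDb();        // ~3000 ms against a real database

        var webService = new SomeWebService();
        webService.CallSomeEndpoint();      // ~3000 ms over the network

        var logger = new SomeFileLogger();
        logger.WriteLogFile();              // ~2000 ms of disk I/O

        return true;
    }
}

public class SomeServiceTests
{
    [Fact]
    public void DoStuff_GivenNone_ExpectedTrue()
    {
        // This 'unit test' is really an integration test: it hits the
        // database, the web service and the file system.
        var service = new SomeService();
        Assert.True(service.DoStuff());
    }
}
```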

So what are the problems with the unit test above? The test takes too long. And it needs to assert too much. If you add up the milliseconds you’ll be looking at 8000 milliseconds in total. Of course these numbers are fictitious and a bit exaggerated for the purpose of this example, but even half of that would be way too much for a simple unit test. This is caused by calls to external components, which simply take longer processing time. So as figure 2 showed us, we need to sever the connections (read: dependencies) and not actually write something to the database, call a web service, or write a log file, if we simply want to unit test DoStuff. To be able to do this we need to stop initializing everything in the DoStuff method. That is where Dependency Injection is going to help us. Listing 2 shows our second attempt with ImprovedSomeService:


Listing 2: ImprovedSomeService
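A sketch of what ImprovedSomeService could look like; the interface names are assumptions based on the text:

```csharp
public class ImprovedSomeService
{
    private readonly ISomeRepository repository;
    private readonly ISomeWebService webService;
    private readonly ISomeFileLogger logger;

    // Constructor Injection: the dependencies are handed in
    // instead of newed up inside DoStuff.
    public ImprovedSomeService(ISomeRepository repository,
                               ISomeWebService webService,
                               ISomeFileLogger logger)
    {
        this.repository = repository;
        this.webService = webService;
        this.logger = logger;
    }

    public bool DoStuff()
    {
        repository.WriteStuffToDb();
        webService.CallSomeEndpoint();
        logger.WriteLogFile();
        return true;
    }
}
```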

And the unit test could be something along the lines of:


Listing 3: Second attempt at unit test with ImprovedSomeService
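A sketch of such a unit test, using FakeItEasy’s A.Fake and A.CallTo as described below (again with the assumed interface names):

```csharp
public class ImprovedSomeServiceTests
{
    [Fact]
    public void DoStuff_GivenNone_ExpectedTrue()
    {
        // FakeItEasy creates the fakes; the context decides
        // whether each one acts as a mock or a stub.
        var repository = A.Fake<ISomeRepository>();
        var webService = A.Fake<ISomeWebService>();
        var logger = A.Fake<ISomeFileLogger>();

        // Inject the fakes through the constructor.
        var service = new ImprovedSomeService(repository, webService, logger);

        var result = service.DoStuff();

        // Assert the return value and that the expected calls happened.
        Assert.True(result);
        A.CallTo(() => repository.WriteStuffToDb()).MustHaveHappened();
        A.CallTo(() => webService.CallSomeEndpoint()).MustHaveHappened();
    }
}
```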

As you can see in listing 3 Dependency Injection makes it very easy to fake all the components that are not needed in the ‘unit of work’ at hand. In this example the free library FakeItEasy [8] is used to create mocks and stubs. The unit test begins by making some fakes (FakeItEasy makes syntactically everything a fake and the context decides whether it is used as a mock or a stub). These fakes are injected into the service via its constructor. Then the test simply proceeds by asserting if some calls to certain methods actually happened and whether the return value is true from the method DoStuff. This test is focused and concise, plus the time this test takes is not even worth mentioning.


It should be easy to see via listing 3 how Dependency Injection can make your pain with TDD a little lighter. Let’s throw the familiar diagram on here to provide some context:

Figure 4: TDD cycle

As you know, you write the test first, so imagine we had the requirement to ‘do’ some ‘stuff’. I write the test first, so I try to call a DoStuff method which doesn’t exist yet. This is the red state. I generate the method. The test becomes green. I refactor this by making a SomeService class, putting the DoStuff method in there and initializing SomeService in the test to call DoStuff on it. And you repeat the process by wanting to know in the test if DoStuff failed or succeeded. The test is red. DoStuff needs a Boolean return value and we return true in the method. The test is green. During refactoring we could for instance only return true if some operation actually succeeded. This operation could for instance be putting stuff in the database. So in the DoStuff method of SomeService I call WriteStuffToDb on the non-existent class SomeRepository, making the test become red again. I generate the class and the method. I am not interested right now in the actual implementation of the call, so to make the test green I fake the class and test if the method is called at all. Then I refactor this so the implementation of ISomeRepository is injected through the constructor, making it easy for the next iterations to insert dummy repositories. And so on…


I showed you the difference between integration tests and unit tests. Then I showed you how Dependency Injection makes unit testing a lot easier. Finally I brought some TDD into the mix. You can find all the code and more on my GitHub: [9]. This article is also published in Dutch SDN magazine #127, which you can download here: [10]. I recommend that you do, because Pepijn Sitter [11] wrote a cool article about Roslyn’s Analyzers, Gerald Versluis [12] about Xamarin.Forms, and so on. So go check it out.

Side note: I used Ninject as the IoC container because the API is awesome!


  1. Introduction Dependency Injection:
  2. Dependency Injection based on run-time values:
  3. AOP with Interception:
  4. Art of Unit Testing:
  5. Definition of a Unit Test:
  6. Great discussion about seperate projects for tests:
  7. Reasons for XUnit:
  8. FakeItEasy:
  9. GitHub:
  10. SDN 127:
  11. Pepijn Sitter:
  12. Gerald Versluis:

Increase productivity with XML documentation


This post was originally a guest blog which you can find: here. Let’s talk about commenting code for a second. I was triggered by this rant to examine how we actually do this on our team. And as I reviewed our codebases I quickly discovered that almost every one of us has a totally different style of commenting. During a Sprint Retrospective I suggested to the team that we not only comment our code in the same places, but also in the same style. In an Agile Scrum environment where seniors leave the team and juniors enter the team, we could all agree that this was going to help our code become more readable. To help us get started on this I introduced XML Documentation Comments: a style of commenting code I am into. I’ll first briefly explain what it is and then I’ll explain why I prefer XML documentation over inline comments.

What is XML documentation?

Adding XML documentation is like adding annotations above classes and methods. Except they are not metadata and not included in the compiled assembly (and therefore not accessible through reflection). You can write XML documentation by typing triple slashes directly above the line with the class or method declaration. The Visual Studio IDE will add the correct and commonly used tags. So above a class it looks like:
Figure 1: XML documentation above class

Side note: I am showing wrong comments (as in not revealing intent) on purpose. We’ll get back to that later.

And above a method as follows:
Figure 2: XML documentation above method

Side note: Notice how the tag ‘param’ is added automatically with the attribute name and the correct parameter name. This is monitored by the IDE and will produce a warning if it’s not correct, so very maintainable.

Above a method with a return type:
Figure 3: XML documentation above method with return type

Side note: Please note the escape characters around ‘IAgenda’.

The above examples are automatically generated and therefore the most commonly used. If you want to play around with other tags which are recommended for use you can follow this link:

Why XML Documentation over inline comments?

So why do I prefer XML documentation over inline comments? Well, there are several reasons:

  • Forces you to think about your comments
  • Maintainable
  • Provides IntelliSense
  • Automatically builds API documentation

I’ll explain them in order:

Forces you to think about your comments

As I’ve said in the side note beneath figure 1 I have intentionally made some horrible comments. Have you ever come across something like:

//Gets a new Foo
public static IFoo GetNewFoo(ICustomer customer)
{
    //New up a Foo
    IFoo foo = new Foo();
    //If IsPrefered = true then add collection of Bars
    foo.Bars = GetBars();
    //Return new Foo
    return foo;
}

What do these comments say? Absolutely nothing more than the code that is written. What should comments do? They should reveal the business intent behind code. They should tell the new guy that has to clean up your mess why the code was written the way it was back then. Let’s try this with more business intent in the mix:

/// <summary>
/// Gets a new Foo for the customer. If the customer is a preferred customer they are
/// entitled to a collection of Bars.
/// </summary>
/// <returns>A new instance which adheres to the IFoo interface, which contains a
/// collection of Bars depending on whether or not the customer has the preferred status.</returns>
public static IFoo GetNewFoo(ICustomer customer)
{
    IFoo foo = new Foo();
    foo.Bars = GetBars();
    return foo;
}

This example is terribly contrived, but I hope I can make clear what I mean with comments that portray business intent. And I’ve experienced that thinking about good XML Documentation forces you to think more about the comments you write down.


Maintainable

XML Documentation is checked to some degree by the Visual Studio IDE, generating warnings where needed. Inline comments are not checked at all; and how could they be checked? There’s no structure.

Provides IntelliSense

How nice is it to tap the period key, focus on the properties/methods and see a clear and concise explanation? Fantastic, right? Imagine a colleague seeing the IntelliSense you provided and understanding what it does. Awesome, right? So to give a concrete example, the IntelliSense for the method in figure 3 would be:
Figure 4: IntelliSense showing the XML documentation

Automatically builds API documentation

You’ve read that correctly. There’s a free open source tool out there called Sandcastle, which generates your API documentation based on these XML documentation comments. Hook Sandcastle into your Build Definition on TFS and have it generate API documentation as part of your Continuous Integration pipeline!


I hope after reading my blogpost I’ve made you enthusiastic about XML Documentation as well. There are some nice benefits to be had, so drop this on your team at the next Sprint Retrospective and see how it lands.


Real World Example of Adding Auditing With Dependency Injection’s Interception

Synopsis: This article walks through a real-world example in which the Crosscutting Concern (1) ‘Auditing’ was added to an application without breaking the Single Responsibility Principle (2) or the Open/Closed Principle (3), by utilizing Dependency Injection’s Interception. Get the example from my GitHub: . If you can read Dutch you can also download the SDN magazine (Microsoft Azure/Xamarin) in which this article is featured:


For this article I assume you have some basic knowledge of Dependency Injection. But if you don’t, not to worry! I have introduced Dependency Injection (4) and showed a possible route to take when the dependencies depend on run-time values (5). A common scenario: after a few months of developing the core domains of an application, ‘auditing’ capability needs to be added. Adding Crosscutting Concerns like auditing to an existing application can be quite a hassle if the architecture hasn’t taken it into account from the ground up. However, taking all Crosscutting Concerns into account while ‘growing’ the architecture is not beneficial in an Agile Scrum environment, in which you are supposed to deliver potentially shippable increments every sprint. Enter the domain of Aspect-oriented Programming (6). AOP promotes a loosely coupled design with which you can mitigate these problems. Add in an Inversion of Control (7) framework that supports AOP by enabling Interception (like Unity) and Crosscutting Concerns become a breeze. Let me show you how I added auditing and let us scratch the surface of Interception.


As this particular application deals with patients and their medical data, it is important to log which user did what, when and where. Next to a mutation database on the medical data (which is a core domain due to the nature of the application) I also wanted to log things in more detail, like: User A read prescription X of patient Y at July 23rd 2015, 8:40 am. Or: User B authorized contact moment #11 of patient B, created by User A, at July 23rd 2015, 8:42 am. This is not a concern during the first few sprints of development. But at a certain point you’ll want to add this auditing capability and you’ll have to crack open several classes to insert said functionality, violating the Open/Closed Principle. And the Single Responsibility Principle, because now the classes are apparently also responsible for auditing? We’ll get to the solution for this problem with Dependency Injection and Interception soon. But to really understand what is going on, let’s solve this problem by taking a look at the Decorator Pattern (8) with poor man’s Dependency Injection first.

Side note: This blogpost does not address the Decorator Pattern with inheritance due to its drawbacks, which is perhaps a subject for another article.

Decorator Pattern

Figure 1: Decorator Pattern in action!

We have a class named PrescriptionService which implements the interface IPrescriptionService. The interface consists of only one method: GetPrescriptionByID, which is implemented as follows:

public IPrescription GetPrescriptionByID(int ID)
{
    Console.WriteLine("{0} is called!", nameof(GetPrescriptionByID));
    IPrescription prescription = Program.Container.Resolve<IPrescription>();
    prescription.ID = ID;
    switch (ID)
    {
        case 1:
            prescription.PatientID = 1;
            prescription.MedicationName = "Aspirin";
            prescription.Dosage = "2 tablets each day";
            break;
        case 2:
            prescription.PatientID = 1;
            prescription.MedicationName = "Unisom";
            prescription.Dosage = "1 tablet each day";
            break;
        case 3:
            prescription.PatientID = 2;
            prescription.MedicationName = "Dulcolax";
            prescription.Dosage = "2 tablets every other day";
            break;
        case 4:
            prescription.PatientID = 3;
            prescription.MedicationName = "Travatan";
            prescription.Dosage = "3 drops each day";
            break;
        case 5:
            prescription.PatientID = 4;
            prescription.MedicationName = "Canesten";
            prescription.Dosage = "Apply 6 times each day";
            break;
        default:
            throw new ArgumentException(String.Format("{0} has unknown value", nameof(ID)));
    }
    return prescription;
}

Listing 1: GetPrescriptionByID

But after GetPrescriptionByID is called we want to log this event. So I’ve added AuditingPrescriptionService whose sole purpose it is to log this call. It wraps the original PrescriptionService by demanding an instance of any class that implements IPrescriptionService in its constructor (Constructor Injection). And it needs to know the user for my Auditing requirements, so demands that too in its constructor. Like so:

public AuditingPrescriptionService(IPrescriptionService service, int userID)
{
    if (service == null)
        throw new ArgumentNullException(nameof(service));
    if (userID <= 0)
        throw new ArgumentOutOfRangeException(nameof(userID));
    this.Service = service;
    this.UserID = userID;
}

Listing 2: Constructor AuditingPrescriptionService

And the method GetPrescriptionByID is augmented in AuditingPrescriptionService:

public IPrescription GetPrescriptionByID(int ID)
{
    IPrescription prescription = Service.GetPrescriptionByID(ID);
    //TODO write to log that user ? read prescription ? of patient ? at a certain date and time
    return prescription;
}

Listing 3: Augmented GetPrescriptionByID

Let us not worry about the actual implementation right now. What is important to note, however, is that with this approach you will have to write a decorator for everything you need to wrap. And what if you have more than one Crosscutting Concern to implement, like caching or authorizing? The number of decorator classes will grow quickly. Lots of little classes will have to be created to implement all Crosscutting Concerns, creating a potential maintenance nightmare. The solution, as said before, is Interception, but let’s take a brief look at how this problem could be solved with an AOP framework as well, to fully appreciate Interception.
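To make the pain concrete, composing even one decorated service by hand (poor man’s Dependency Injection) looks something like this; the caching and authorizing wrappers named in the comment are hypothetical:

```csharp
// Poor man's Dependency Injection: compose the decorator by hand.
IPrescriptionService service =
    new AuditingPrescriptionService(new PrescriptionService(), userID);

// Every extra Crosscutting Concern means another hand-written wrapper, e.g.:
// new CachingPrescriptionService(
//     new AuthorizingPrescriptionService(service, userID));

IPrescription prescription = service.GetPrescriptionByID(1);
```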

AOP Framework

There are lots of fantastic AOP frameworks out there for .NET, like LOOM.NET and Aspect.NET, and Spring.NET supports it now too. But PostSharp must be one of the most popular frameworks at the moment of writing this. There are loads of aspects to choose from in PostSharp and it can be as easy as moving the caret to the name of the class or method you want to enhance and choosing the pattern from the light bulb (smart tag in versions prior to Visual Studio 2015). If we want to add logging to a method, all we have to do is follow a few simple steps (9). It is like magic, right? So why wouldn’t I want to use an AOP framework?

There are a few reasons in order of importance to me:

  1. It’s tightly coupled to the AOP framework’s implementation of logging, even if you have a choice.
    Due to the nature of the branch I work in, the operational environment has some restrictions, which is why I have to build specialized logging.
  2. If you’ve followed the steps from the link above to add logging, you have seen that the attribute [Log] is added on top of a method. Without settings it’ll do default logging. So if I don’t want the default behaviour anymore I need to adjust the logging attribute. To be clear: I’ll have to change all the [Log] attributes.
    I can see people’s minds changing a lot of times in the near future about logging (often pushed by our government), so I need something more flexible.
  3. You can’t late bind these aspects through the configuration file, since these aspects are compiled into the assemblies.
    An extension to points one and two, in a way. Some operational environments will require different behaviour from the logging than other environments. If I can’t tell at run time which logging aspect I need, then I won’t be able to fulfil this requirement.
  4. The complexity of unit tests increases when using an AOP framework.
    I am a big fan of TDD, and while Dependency Injection actually caters to unit testing from the ground up, an AOP framework impedes unit testing to a point where your tests become harder to read (and people feel the need to write documentation about how to actually go about doing it: (10)).

Side note: What is important to mention here is that I am not against AOP frameworks. On the contrary, I don’t understand the whole DI versus AOP debate out there, since I think they complement each other in certain architectures. If you are interested in this kind of information I can recommend a good blogpost by Kenneth Truyers: (11) (keep in mind he favours DI in this post though). And PostSharp, for instance, has the SkipPostSharp flag which you can apply in your test project.

These problems sketched above can easily be solved with an IoC Container’s ability to dynamically apply Interception.

Dependency Injection and Interception

Interception looks conceptually like this:
Figure 2: Concept of Interception

Some client calls the method GetPrescriptionByID of the PrescriptionService class and the call gets ‘intercepted’ by some Logging behaviour first, hence the name. In the figure above I have drawn a ‘chain’ of Interceptions for illustrative purposes (and the funny thing is that the arrows look like a chain this way): three very common Crosscutting Concerns chained together. So in this case the retrieved Prescription apparently gets cached for a period of time, but not before a check whether the user is authorized to see prescriptions at all. And the call gets logged no matter what. Boy, I really wish I could add these behaviours (or ‘aspects’) dynamically later on in development. But I can! I just need a little help from Unity.
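Conceptually, such a chain is nothing more than delegates wrapping delegates, which is essentially what Unity will generate for us later on. A minimal stand-alone sketch of figure 2 (no Unity involved; the names are made up to mirror the figure):

```csharp
using System;
using System.Collections.Generic;

public static class InterceptionConcept
{
    // Builds the chain Logging -> Authorization -> Caching -> target and returns the trace.
    public static List<string> Run()
    {
        var trace = new List<string>();

        // The intended target: the PrescriptionService call from figure 2.
        Func<string> target = () =>
        {
            trace.Add("Target: GetPrescriptionByID executed");
            return "Prescription 1";
        };

        // Each behaviour wraps the next link in the chain.
        Func<Func<string>, Func<string>> logging = next => () =>
        {
            trace.Add("Logging: call invoked");
            var result = next();
            trace.Add("Logging: call returned " + result);
            return result;
        };
        Func<Func<string>, Func<string>> authorization = next => () =>
        {
            trace.Add("Authorization: user may view prescriptions");
            return next();
        };
        Func<Func<string>, Func<string>> caching = next => () =>
        {
            trace.Add("Caching: checking the cache first");
            return next();
        };

        // Compose the chain at run time, exactly like figure 2; the order is just data.
        Func<string> chain = logging(authorization(caching(target)));
        trace.Add("Retrieved: " + chain());
        return trace;
    }

    public static void Main()
    {
        foreach (var line in Run())
            Console.WriteLine(line);
    }
}
```

The point is only that the chain is assembled at run time; Unity’s interception does the same wiring for you behind a generated proxy.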

Interception with Unity

There are a few simple steps:

  1. Make sure you get Unity.Interception from NuGet
  2. Add Container.AddNewExtension<Interception>(); in the Composition Root
  3. Add a class named LoggingInterceptionBehavior (the ‘InterceptionBehavior’ part is a convention you don’t necessarily need to abide by)
  4. Add using Microsoft.Practices.Unity.InterceptionExtension; at the top of the file
  5. Have the class implement the interface IInterceptionBehavior (that is how Unity understands this class can be used to intercept calls)

The interface has two methods and a property you need to implement:

public interface IInterceptionBehavior
{
    bool WillExecute { get; }
    IEnumerable<Type> GetRequiredInterfaces();
    IMethodReturn Invoke(IMethodInvocation input, GetNextInterceptionBehaviorDelegate getNext);
}

Listing 4: Interface IInterceptionBehavior

The property WillExecute determines whether it makes sense to have the call intercepted by this behaviour. If it doesn’t, you can have this property return false and you won’t pay the unnecessary overhead of proxy or intercepting-class generation. I want my logging behaviour to always run, so I simply let WillExecute return true (get { return true; }).

The method GetRequiredInterfaces is basically there to pre-set which interfaces this behaviour requires the proxy to implement. But I like to configure interception in the Composition Root (and so should you), so I just let this method return an empty array of type Type (return Type.EmptyTypes;).
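Put together, the two members above give the behaviour the following skeleton (a sketch assuming the Unity.Interception NuGet package is referenced; the Invoke body is filled in below):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Practices.Unity.InterceptionExtension;

public class AuditingInterceptionBehavior : IInterceptionBehavior
{
    // Always intercept; returning false would skip proxy generation for this behaviour.
    public bool WillExecute
    {
        get { return true; }
    }

    // No additional interfaces required; which types to intercept is configured
    // in the Composition Root instead.
    public IEnumerable<Type> GetRequiredInterfaces()
    {
        return Type.EmptyTypes;
    }

    public IMethodReturn Invoke(IMethodInvocation input, GetNextInterceptionBehaviorDelegate getNext)
    {
        // Placeholder: simply pass the call on to the next behaviour or the target.
        return getNext()(input, getNext);
    }
}
```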

The Invoke method is where the real work gets done. The parameter input represents the call to a method (12), and getNext is a delegate which can be invoked to get the next delegate to call, to keep the chain going (13). For our AuditingInterceptionBehavior I have implemented it as follows:

public IMethodReturn Invoke(IMethodInvocation input, GetNextInterceptionBehaviorDelegate getNext)
{
    //The logged in user happens to be Danny all the time. ;-)
    string identity = "Danny";
    //Before: You can write a message to the log before the next behaviour in the chain/intended target gets called.
    WriteLog(string.Format(
        "{0}: User {1} {2}. Technical details: Invoked method {3}",
        DateTime.Now, identity,
        (input.MethodBase.GetCustomAttributes(typeof(DescriptionAttribute), false).FirstOrDefault() as DescriptionAttribute)?.Description,
        input.MethodBase.Name));
    //Actual call to the next behaviour in the chain or the intended target.
    var result = getNext()(input, getNext);
    //After: And you can write a message to the log after the call returns.
    if (result.Exception != null)
    {
        //You can for instance write to the log if an exception has occurred.
        WriteLog(string.Format(
            "{0}: Method {1} threw exception: {2}",
            DateTime.Now, input.MethodBase.Name, result.Exception.Message));
    }
    else
    {
        //Or you can write more useful information.
        WriteLog(string.Format(
            "{0} User {1} {2}: {3}. Technical details: Returned from method {4}",
            DateTime.Now, identity,
            (input.MethodBase.GetCustomAttributes(typeof(DescriptionAttribute), false).FirstOrDefault() as DescriptionAttribute)?.Description,
            result.ReturnValue, input.MethodBase.Name));
    }
    return result;
}

Listing 5: Implementation of the Invoke method

Side note: I decorated GetPrescriptionByID in the interface with a Description attribute and I have overridden the ToString method of Prescription so the message is easier to read for display purposes.

The actual implementation is not important.
The important part is in the middle: var result = getNext()(input, getNext); actually keeps the chain going. By invoking the delegate getNext() an InvokeInterceptionBehaviorDelegate is returned, with which you either call Invoke on the next behaviour or invoke the intended target, depending on where you are in the chain.
The other important bit is return result;, which keeps the chain going back up by returning the IMethodReturn instance (14). It contains the intended return value in its property ReturnValue, which is used in listing 5. Another useful property used in listing 5 is Exception, so you can handle exceptions in the chain. WriteLog is actually a simple call to the Console (Console.WriteLine(message);) as this example is a Console Application, but it could of course be a call to your auditing library of choice.

Now in the Main method of the Program class, which we’ll use as the Composition Root, the following lines of code are needed:

IUnityContainer Container = new UnityContainer();
Container.AddNewExtension<Interception>();
Container.RegisterType<IPrescription, Prescription>();
Container.RegisterType<IPrescriptionService, PrescriptionService>(
    new Interceptor<InterfaceInterceptor>(),
    new InterceptionBehavior<AuditingInterceptionBehavior>());
IPrescriptionService service = Container.Resolve<IPrescriptionService>();
IPrescription prescription = service.GetPrescriptionByID(1);
Console.WriteLine("Retrieved: {0}", prescription);

Listing 6: Main method of class Program

Let’s briefly go over the Interceptor types. As you can see at “new Interceptor” we are using the InterfaceInterceptor, because the class implements only one interface. Always try to use this type of interception for performance reasons. Only use TransparentProxyInterceptor if your class implements more than one interface (or no interface at all) and you’d like to add behaviour to all implemented methods. Both aforementioned interceptor types work by dynamically creating a proxy object which is not type compatible with the target object. Try it: if you write “?service.GetType()” in the Immediate Window you’ll get something like “DynamicModule.ns.Wrapped_IPrescriptionService_83729647ce664f89b08534edc98a7858” as FullName. If you want more control over your behaviours and you need to intercept internal calls within the class (or abstract classes), you need VirtualMethodInterceptor. But because it works by deriving from the type of the target object, your methods must be virtual, the class must be public, and it cannot be applied to existing objects, only configured as type interception. The chain of behaviours is then established by overriding the virtual methods in the base type.
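This registration style also answers point 3 from the reasons at the top: because the behaviour is attached in the Composition Root, you can choose it at run time, for instance from configuration. A rough sketch (the appSetting key and the VerboseLoggingInterceptionBehavior class are hypothetical; assumes System.Configuration and the Unity.Interception package):

```csharp
// Hypothetical: choose the logging aspect at run time from an appSetting,
// something a compile-time [Log] attribute cannot do.
string configured = ConfigurationManager.AppSettings["LoggingBehavior"]; // hypothetical key
Type behaviorType = configured == "Verbose"
    ? typeof(VerboseLoggingInterceptionBehavior)   // hypothetical behaviour
    : typeof(AuditingInterceptionBehavior);

Container.RegisterType<IPrescriptionService, PrescriptionService>(
    new Interceptor<InterfaceInterceptor>(),
    new InterceptionBehavior(behaviorType));
```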

For this example InterfaceInterceptor was enough. In conclusion, running listing 6 will output:
Figure 3: Output

As you can see in figure 3, “GetPrescriptionID is called!” sits right in between the two messages from the AuditingInterceptionBehavior, showing that this behaviour was added dynamically to the call chain.

Question: a question I got specifically about this example is what you should do if you need finer-grained auditing and want to log messages from within the method.
If you feel the need to do this, I sincerely implore you to look critically at your design: aren’t your methods too large and doing too much? Change the design accordingly, so you get auditing exactly where you need it.
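To make that concrete: if a single method both validates and retrieves, splitting the steps behind an interface lets the interception chain audit each step separately (the interface below is hypothetical, purely illustrative):

```csharp
// Hypothetical refactoring: instead of one large method with log calls inside it,
// each step becomes an interface method that passes through the interception
// chain, so auditing happens at every boundary.
public interface IPrescriptionWorkflow
{
    void ValidateRequest(int prescriptionId);
    IPrescription RetrievePrescription(int prescriptionId);
}
```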


Thank you for reading my article. I would like to leave you with the following thoughts. First of all, we’ve only scratched the surface of Interception here, so be prepared to learn more along the way. Second of all, Dependency Injection’s Interception is not the silver bullet for Crosscutting Concerns, and neither are AOP frameworks. Don’t be afraid to use whatever suits your needs best in your current architecture. If you can read Dutch you can also download the SDN magazine (Microsoft Azure/Xamarin) in which this article is featured: Now go have fun with my example on GitHub and add some behaviours:


  1. Crosscutting Concern:
  2. SRP:
  3. Open/closed principle:
  4. Intro DI:
  5. DI and run-time values:
  6. AOP:
  7. IoC:
  8. Decorator Pattern:
  9. PostSharp’s simple steps:
  10. PostSharp testing:
  11. DI vs. AOP:
  12. IMethodInvocation:!/Packages/Unity.Interception/Microsoft.Practices.Unity.Interception/IMethodInvocation
  13. GetNextInterceptionBehaviorDelegate:!/Packages/Unity.Interception/Microsoft.Practices.Unity.Interception/GetNextInterceptionBehaviorDelegate
  14. IMethodReturn:!/Packages/Unity.Interception/Microsoft.Practices.Unity.Interception/IMethodReturn
Posted in .NET programming, Dependency Injection, Software design patterns

New Job + published in magazine!

To whom it may concern.

It has been difficult to achieve my target of one blog post per two weeks, for various reasons. Luckily these reasons are all… really awesome! So I thought I’d share them with whoever is interested. Here we go:

  1. I have been very busy arranging things for… dramatic pause… my new job! As of 1 November I am going to work at Sound of Data, who are responsible for complex software like the mass voting systems of, for instance, The Voice (of Holland) and the Junior Eurovision Song Festival. For this job I need to dive into the world of distributed systems with messaging architectures (like MSMQ/NServiceBus) and similar technologies. I am really excited, but really busy right now because of this.
  2. I finished an article about Dependency Injection and Interception (AOP) just before my holiday, and as I was about to post it I got the opportunity to have it printed in a magazine first! How awesome is that? It does mean, however, that I am only allowed to post it online after the magazine has been released (which I will do in a week). It also means I didn’t have time to wrap up a new post for my bi-weekly update. So again: a magazine is going to publish my article! If you can read Dutch you can find this magazine here:
  3. I have been on vacation in August. If my wife had seen me writing about work-related stuff she would’ve ripped me a new one! So I couldn’t do anything but relax… How sad for me. 😉

So I hope I will be able to publish frequently again; I am going to try. The next post should be about how Dependency Injection makes your unit testing life easy (with xUnit/NSubstitute), and after that I want to do one about lifetime management with IoC Containers (like Unity). At least you now know what is keeping me busy. Thank you for your continued interest.

Posted in Uncategorized