ASP.NET Core 1.0: Goodbye HTML helpers and hello TagHelpers!

Synopsis: ASP.NET Core 1.0 [MVC 6] comes with an exciting new feature called TagHelpers. Read on to see why I think we can kiss HTML helpers goodbye. Find the accompanying source code on my GitHub [4]. Please note that MVC 6 will be mentioned in square brackets, because at the moment of writing it was still called that, but it is all simply called ASP.NET Core 1.0 now. I first wrote this article for SDN magazine #128 [8].

What are TagHelpers?

TagHelpers can be seen as the evolution of HTML helpers, which were introduced with the launch of the first MVC framework. To provide context, you have to imagine that with classic ASP the only way you could automate the generation of HTML was via custom subroutines. After that, ASP.NET came with server controls, with view state as the biggest selling point, to simulate the look and feel of desktop applications and help desktop developers with the transition. But we all know what happens when we try to jam square pegs into round holes. We had to face the fact that web development is nothing like desktop development. To get in line with proper web development, the ASP.NET MVC framework was launched with HTML helpers to automate the HTML output. But HTML helpers never really gelled, especially not with front-end developers and designers. One of the main pet peeves was that they made you switch a lot between angle brackets (HTML, CSS) and C# (Razor syntax) while working on views, which made the experience unnecessarily uncomfortable. [MVC 6] wants to address this and some smaller issues by introducing TagHelpers. More on this here [1]. On the view side they behave just like HTML tags, which reduces context switching. The easiest way to introduce a TagHelper is to look at the one for the anchor tag. With the HTML helper this would’ve been done as:

@Html.ActionLink("Home", "Index", "Home")

With the anchor TagHelper this would look like:

[Image: the anchor TagHelper in Razor]
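Something along these lines (a sketch of the conventional syntax, pointing at the Home controller and Index action):

<a asp-controller="Home" asp-action="Index">Home</a>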
Side note: Please note that asp- is just a convention, but more on that later.

The output rendered in the browser is the same for both:

[Image: the rendered anchor element]
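Roughly the following, assuming the default route (the exact href depends on your routing):

<a href="/">Home</a>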
Side note: Provided the default route has not been altered.

This should illustrate that a designer, for instance, will be much more comfortable with the TagHelper syntax, as it handles just like plain HTML, as opposed to the HTML helper, which is almost like calling a method in C#.

The hyperlink TagHelper is not the only one provided by [MVC 6]. Let’s have a quick run through and start off by how to actually get the TagHelpers in your project.

Common TagHelpers

[MVC 6] comes with a bag of TagHelpers and to add them to your project you need to add the following line to the dependencies section of the project.json file:

"Microsoft.AspNet.Mvc.TagHelpers": "6.0.0-rc1-final"

Side note: When you save the file Visual Studio should automatically download and install the appropriate NuGet package(s), but if it doesn’t you can also right click the project and choose “restore packages” from the context menu, or use the command “dnu restore” in the developer command prompt from the folder where the solution file resides.

Then you need to make the view aware that you would like to enable TagHelpers. This is an explicit directive, which is the way you want it! Just add this line of code to the top of your view:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

Notice how the directive makes use of the glob notation [2]. The first parameter is the name (or names) of the TagHelper(s) you want to use. By specifying the wildcard (*) you instruct the framework to add all TagHelpers. The second parameter is the assembly name. So here we give the instruction to add all TagHelpers from the Microsoft.AspNet.Mvc.TagHelpers assembly.

Side note: You could also add this directive to a new Razor file introduced with ASP.NET 5 called _ViewImports.cshtml. Its main purpose is to provide namespaces which all other views can use, similar to what the web.config file in the Views folder used to do. You can add this file to any view folder to have finer-grained control over which views have access to what exactly. And it’s additive, pretty convenient! You can read more on this file here [6].

Once this line is added to your view you’ll have access to the following predefined TagHelpers:

  • Anchor
  • Cache
  • Environment
  • Form
  • Input
  • Label
  • Link
  • Option
  • Script
  • Select
  • TextArea
  • ValidationMessage
  • ValidationSummary

Side note: If you don’t see TagHelpers in your IntelliSense you need to add a reference to "Microsoft.AspNet.Tooling.Razor": "1.0.0-rc1-final" in your project.json file.

Let’s briefly run through these Tag Helpers.

Anchor

Well, we’ve seen this in its simplest form already:

[Image: the anchor TagHelper with asp-controller and asp-action]

Just the “asp-controller” and “asp-action” attributes. But the anchor TagHelper has much more to offer.

  • You can add route parameters by using the “asp-route-” prefix (for example: asp-route-id="@ViewBag.PatientId").
  • You can use a named route with the “asp-route” attribute.
  • You can force protocols like HTTPS by using, for example, asp-protocol="https" (and if you want to do this for a certain domain you use “asp-host”).
  • You can link to a specific part of your page by using “asp-fragment”. A combined sketch follows below.
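A sketch combining a few of these attributes (the controller, action and fragment names are made up for illustration; asp-route-id comes from the example above):

<a asp-controller="Patient" asp-action="Details" asp-route-id="@ViewBag.PatientId" asp-fragment="prescriptions">Details</a>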

We’ll need a follow-up to talk about the anchor Tag Helper separately.

Cache

The cache Tag Helper can be useful for improving performance by storing cache entries in local memory. This means that anything that stops the host process will cause the cache entries to be lost. This is something to keep in mind when, for instance, your Azure instances get scaled: a scale-down means losing the cache.

But if you bear this in mind, it simply works by wrapping any portion of a Razor view you want and storing that content in an instance of IMemoryCache. The simplest form looks like:

[Image: the cache TagHelper wrapping some Razor content]
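Something like this, where any Razor content can be wrapped (the content here is just an example):

<cache>
    Last updated: @DateTime.Now
</cache>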

But you can do a lot more cool stuff with this, like having it vary by user, or by route. I’ll treat the cache Tag Helper in detail another time.

Form

The form Tag Helper reads much more easily than the form HTML helper and, at its most basic, looks like:

[Image: the form TagHelper in Razor]
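A minimal sketch (the controller and action names are assumptions):

<form asp-controller="Home" asp-action="Index" method="post">
    <!-- form fields go here -->
</form>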

The output would be:

[Image: the rendered form element]
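Roughly like this; the action URL depends on your routes and the token value is generated per request:

<form action="/" method="post">
    <!-- form fields go here -->
    <input name="__RequestVerificationToken" type="hidden" value="..." />
</form>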

Side note: As you can see the Tag Helper automatically renders an input element to create the anti-forgery token [7].

There are more options, but I’ll write about that separately.

Input

The input Tag Helper is one of the form elements and could be used to replace the HTML helper EditorFor. Imagine the following class:

	public class SomeModel
	{
		public string SomeString { get; set; }
	}

The input Tag Helper looks like:

[Image: the input TagHelper in Razor]
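A sketch of the markup, binding the input to the SomeString property (the rendered element gets matching id, name and type attributes):

<input asp-for="SomeString" />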

This renders as:

[Image: the rendered input element]

The input Tag Helper will be described further another time.

Label

The label TagHelper is one of the form elements and could be used to replace the HTML helper LabelFor. Imagine the following class:

	public class SomeModel
	{
		[Display(Name = "Type something here")]
		public string SomeString { get; set; }
	}

The label Tag Helper looks like:

[Image: the label TagHelper in Razor]
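A sketch of the markup:

<label asp-for="SomeString"></label>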

This renders as:

[Image: the rendered label element showing “Type something here”]

Notice how the label Tag Helper is capable of automatically grabbing the value of the Name property of the Display attribute. And that is all there is to the label Tag Helper really.

Link, Script and Environment

The link and script Tag Helpers also support globbing [2]. An interesting example could be:

[Image: the script TagHelper with asp-src-include]
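Along these lines (the glob pattern is an example):

<script asp-src-include="~/js/*.js"></script>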

This tells the framework, via globbing and the ‘asp-src-include’ Tag Helper attribute, to include all ‘.js’ files in the ‘js’ folder of ‘wwwroot’. I’ve thrown ‘bootstrap.js’, ‘jquery.js’ and ‘npm.js’ in the ‘js’ folder at ‘wwwroot’ and therefore it produces the following output:

[Image: the rendered script elements for bootstrap.js, jquery.js and npm.js]

Pretty convenient right? You can also exclude files, reference files from a hosted CDN and provide a fallback, and much more. You can even apply cache busting via the ‘asp-append-version’ attribute. More on these Tag Helpers in a separate article.

Side note: Actually, when you use it exactly like the example above you’ll get an error, because Bootstrap needs jQuery. You’ll want to reference hosted CDNs and provide fallbacks, for example:

[Image: script TagHelpers with CDN references and fallbacks]
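A sketch of the idea; the CDN URL and file paths are illustrative, but asp-fallback-src, asp-fallback-test and asp-src-exclude are the actual attribute names:

<script src="https://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.1.4.min.js"
        asp-fallback-src="~/js/jquery.js"
        asp-fallback-test="window.jQuery">
</script>
<script asp-src-include="~/js/*.js" asp-src-exclude="~/js/jquery.js"></script>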

But as I’ve written above, this deserves a separate article.

Let’s focus on the environment Tag Helper now instead. This Tag Helper works really well with the script and link Tag Helpers and is therefore included in this paragraph. With ASP.NET Core 1.0 you can set the environment to different stages: Development, Staging, and Production. And with the environment Tag Helper we could include all the unminified .js files in development, while using the minified versions in staging/production. Like this:

[Image: the environment TagHelper wrapping script tags]
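A sketch of that setup (the minified bundle name is made up):

<environment names="Development">
    <script asp-src-include="~/js/*.js"></script>
</environment>
<environment names="Staging,Production">
    <script src="~/js/site.min.js" asp-append-version="true"></script>
</environment>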

I’ve seen it most commonly used like this and I currently only use this Tag Helper for this reason.

Select and Option

A select is not really useful without options, so they are described together in this section. A select will render as a drop-down list and is the equivalent of the HTML helper ‘DropDownListFor’. So we have ‘SomeModel’ with ‘SomeString’ from our example. And in the ‘Index’ method of the ‘HomeController’ we add some options to a ViewBag property, like this:

[Image: the Index action filling a ViewBag property with options]
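A sketch of what that can look like (the ViewBag property name and the option values are assumptions):

public IActionResult Index()
{
    ViewBag.SomeOptions = new List<SelectListItem>
    {
        new SelectListItem { Text = "First option", Value = "1" },
        new SelectListItem { Text = "Second option", Value = "2" }
    };
    return View(new SomeModel());
}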

You could use this Tag Helper as follows:

[Image: the select TagHelper in Razor]
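Something like this; the cast is there because ViewBag is dynamic while asp-items expects an IEnumerable<SelectListItem>:

<select asp-for="SomeString" asp-items="@((List<SelectListItem>)ViewBag.SomeOptions)"></select>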

Which would render to the browser as:

[Image: the rendered select element with its options]

And if the property bound with asp-for is of type IEnumerable, the Tag Helper will automatically render a multi-select list (a select element with the multiple attribute).

TextArea

The textarea Tag Helper is the replacement of the HTML helper ‘TextAreaFor’ and basically works the same as the input Tag Helper. To show how it works I’ve expanded our ‘SomeModel’ with the property ‘SomeLargeString’ and a ‘MaxLength’ attribute:

[Image: SomeModel expanded with SomeLargeString and MaxLength]
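A sketch of the expanded model (the maximum length value is made up):

	public class SomeModel
	{
		[Display(Name = "Type something here")]
		public string SomeString { get; set; }

		[MaxLength(1024)]
		public string SomeLargeString { get; set; }
	}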

You simply write the tag in HTML as follows:

[Image: the textarea TagHelper in Razor]
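The markup looks like this:

<textarea asp-for="SomeLargeString"></textarea>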

This outputs the following HTML to the browser:

[Image: the rendered textarea element with its validation attributes]

It plays along nicely with attributes like ‘MaxLength’, ‘Required’, etc., emitting the corresponding attributes in the rendered HTML, as can be seen above.

ValidationMessage and ValidationSummary

The textarea Tag Helper is capable of providing its own validation attributes, like ‘data-val-maxlength-max’, but it is also possible to explicitly set the validation with the validation message TagHelper:

[Image: the validation message TagHelper in Razor]
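A sketch of the markup:

<span asp-validation-for="SomeString"></span>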

SomeString could have the ‘Required’ attribute, like so:

[Image: SomeString decorated with the Required attribute]
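For instance (keeping the Display attribute from before):

		[Required]
		[Display(Name = "Type something here")]
		public string SomeString { get; set; }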

The span will actually render to the browser as:

[Image: the rendered validation span]

Don’t forget to change the post method in the HomeController:

[Image: the POST action in the HomeController]
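A sketch of such a POST action; it returns the Succes view mentioned below when the model is valid (the exact signature in the accompanying source may differ):

[HttpPost]
public IActionResult Index(SomeModel someModel)
{
    if (ModelState.IsValid)
    {
        return View("Succes");
    }
    return View(someModel);
}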

Just make a simple Succes.cshtml for when the state of the model is valid. And when the required SomeString is not provided on the post, the same page is returned with the span rendered as:

[Image: the validation span rendered with an error message]

But most of the time it is prettier to summarize all validation messages, and that is where the validation summary Tag Helper comes in:

[Image: the validation summary TagHelper in Razor]
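Roughly like this; in the RC1 bits used here the attribute takes the full enumeration value (in the final release you can simply write "All"):

<div asp-validation-summary="ValidationSummary.All"></div>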

You simply add ‘asp-validation-summary’ to a div tag and it works. The attribute requires a ValidationSummary enumeration value, which can be ‘All’, ‘ModelOnly’ or ‘None’. Why you would ever want to choose ‘None’ is beyond me, just don’t add a validation summary then! ‘ModelOnly’ includes only model-level validation messages, while ‘All’ includes both model-level and property-level messages. It will render as:

[Image: the rendered validation summary]

And when SomeString is not provided while it has the ‘Required’ attribute, it’ll render as:

[Image: the validation summary rendered with an error message]

Those were the TagHelpers provided by [MVC 6]; now let’s see how you can create one yourself!

Custom TagHelpers

You can expand existing HTML elements with a TagHelper or you can make your own TagHelper completely from scratch. This practice is called ‘authoring TagHelpers’ [3]. The first kind simply functions as another attribute on the HTML element, like we saw with the ‘asp-action’ and ‘asp-controller’ attributes on the anchor tag. The process of making your own TagHelper is very similar in both cases. Let’s first create a TagHelper that you can use as attributes.

Attributes

As a nice short example we can have a span tag to which you can provide a message and a level for how big you want that message displayed, like this:

[Image: the custom message TagHelper used on a span]
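Something along these lines; the attribute names asp-message-value and asp-message-level are the names assumed for this walkthrough:

<span asp-message-value="Hello world!" asp-message-level="2" />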

To make this HTML work I wrote the following code which we’ll walk through:

[Image: the MessageTagHelper class]
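A sketch of such a TagHelper, following the walkthrough below. The namespaces are shown as they ended up in the final 1.0 release (in the RC1 bits the packages still start with Microsoft.AspNet), and the property and attribute names are assumptions:

using System;
using Microsoft.AspNetCore.Razor.TagHelpers;

[HtmlTargetElement("span", Attributes = "asp-message-value")]
public class MessageTagHelper : TagHelper
{
    [HtmlAttributeName("asp-message-value")]
    public string MessageValue { get; set; }

    [HtmlAttributeName("asp-message-level")]
    public int MessageLevel { get; set; } = 1;

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        // Render a div instead of the span, because the h tags are block elements.
        output.TagName = "div";
        output.TagMode = TagMode.StartTagAndEndTag;

        // Clamp the level to a valid h1..h6 heading and wrap the message in it.
        // (The message is emitted as-is here; encode it in real code.)
        var level = Math.Min(Math.Max(MessageLevel, 1), 6);
        output.Content.AppendHtml($"<h{level}>{MessageValue}</h{level}>");
    }
}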

First of all you need to derive from the class TagHelper to make a TagHelper (you could theoretically also implement ITagHelper).
With the HtmlTargetElement attribute above the MessageTagHelper class you can specify which HTML element(s) this TagHelper applies to. The Attributes property is a comma-separated list with which you can make certain attributes required. I wanted the MessageValue property to be required, as can be seen in the code above. You can decorate the properties with the HtmlAttributeName attribute to give them a clear name in HTML.

Side note: This attribute is not required, as the property’s name will be converted to ‘lower kebab case’ by convention (so for example: “message-value”). But in this example we adhere to the ‘asp-’ convention for TagHelpers without messing up the names of the properties.

Now you need to override either the ‘Process’ method or, if you expect to handle large files or other long-running work, the ‘ProcessAsync’ method. This is an extremely simple TagHelper, so the synchronous variant is more than sufficient.
You get passed the TagHelperContext and TagHelperOutput parameters. The TagHelperContext contains all kinds of information about the element this TagHelper targets, so in our case the span element in question. I didn’t need this parameter in this example, but you’ll probably need it as the TagHelper becomes more complex. The TagHelperOutput parameter is used all the time, as it is what you use to transform, append to, and remove the HTML.

The TagName is changed to “div” (to be honest this is just because the h tags are block elements and the span tag is an inline element). The TagMode is set to StartTagAndEndTag to enable you to use the span in HTML as a self-closing element. Then you see some code to deal with the given level, nothing fancy. At the end you can see the content appended to the output’s ‘Content’. You also have ‘PreContent’ and ‘PostContent’, which I didn’t need for this simple example. But exploring the TagHelperOutput parameter is really going to help you make complex TagHelpers.

When we run this the actual HTML rendered will be:

[Image: the rendered HTML for the message TagHelper]

As has been said, you can also create HTML elements completely from scratch.

Elements

You can also create entirely new HTML tags. For this example we’ll have a look at Bootstrap styled panels:

[Image: Bootstrap panel markup in plain HTML]
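In plain HTML a Bootstrap 3 panel looks roughly like this (the texts are placeholders):

<div class="panel panel-default">
    <div class="panel-heading">Some header</div>
    <div class="panel-body">Some content</div>
    <div class="panel-footer">Some footer</div>
</div>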

It would be better to be able to write:

[Image: the same panel written with the panel TagHelpers]
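Something along these lines:

<panel>
    <panel-header>Some header</panel-header>
    <panel-body>Some content</panel-body>
    <panel-footer>Some footer</panel-footer>
</panel>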

It is more in line with HTML5’s semantic way of coding the layout. It reads better. So let’s make it happen. All we need is the following code really:

[Images: the PanelTagHelper and the panel header/body/footer TagHelpers]
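A sketch of what those classes can look like, based on the walkthrough below. Namespaces are again as in the final 1.0 release, and only the header variant is written out; the body and footer variants are analogous:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Razor.TagHelpers;

[RestrictChildren("panel-header", "panel-body", "panel-footer")]
public class PanelTagHelper : TagHelper
{
    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        // Render as a div and make sure Bootstrap's panel styling kicks in.
        output.TagName = "div";
        output.Attributes.SetAttribute("class", "panel panel-default");
        return base.ProcessAsync(context, output);
    }
}

[HtmlTargetElement("panel-header", ParentTag = "panel")]
public class PanelHeaderTagHelper : TagHelper
{
    public override Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = "div";
        // Bootstrap 3 names this class "panel-heading".
        output.Attributes.SetAttribute("class", "panel-heading");
        return base.ProcessAsync(context, output);
    }
}

// PanelBodyTagHelper and PanelFooterTagHelper look the same, targeting
// "panel-body" and "panel-footer" and setting those classes respectively.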

Let’s go through the code and start at PanelTagHelper. The TagHelper suffix is discarded and the rest is used as the name for the TagHelper. So in this case the HTML element is called “panel”. We use the RestrictChildren attribute above the class declaration to restrict the types of HTML elements that can be used in between the ‘panel’ tags. Notice for instance how it says ‘panel-header’. Why is that not ‘panelheader’, since the class’s name is PanelHeaderTagHelper? Well, like the properties mentioned earlier, class names also follow lower kebab case. And because we didn’t define another name, the default is used.

Side note: The TagHelper suffix is not required, but it is considered a best-practice convention.

This time the ‘ProcessAsync’ method is overridden. It is still not necessary, but I wanted to show you what it looks like. There we use ‘TagName’ to change the tag to a ‘div’ tag. And we make sure ‘panel’ is in the class attribute, so Bootstrap’s ‘panel’ styling can kick in.

And you see basically the same thing in all the other ProcessAsync methods: just changing the class to the appropriate one for the header, body or footer.

A point of interest, though, is the HtmlTargetElement attribute. It is there to make sure this TagHelper is used on the right HTML element. It has a ParentTag property to say that the TagHelper only applies if the ‘panel-header’ (or body/footer) is nested inside the ‘panel’ tag. But this alone is unfortunately not enough: header, body and footer all satisfy that rule, and we would end up with divs that have all three classes. So we also need to tell exactly which HTML tag the TagHelper applies to, to prevent this from happening.

Side note: You can find much more (and better) TagHelper samples on Dave Paquette’s GitHub [5].

Conclusion

I hope we can all agree that the ideal webpage is written in nothing but HTML and JavaScript. TagHelpers remove awkward C# code from our Razor views, making them look more like straight-up HTML. This is especially great for the non-programmers, like designers, who just want to design great looking pages in HTML and CSS without being tangled in webs of back-end code. But it is also great for the full-stack developer, who does enough context switching as it is. Thank you for reading and check out the source code for this article on GitHub [4]. And don’t forget to check out SDN magazine #128 [8].

Links

  1. TagHelpers vs HTML helpers: http://docs.asp.net/projects/mvc/en/latest/views/tag-helpers/intro.html#tag-helpers-compared-to-html-helpers
  2. Glob notation: https://en.wikipedia.org/wiki/Glob_%28programming%29
  3. Authoring TagHelpers: http://docs.asp.net/projects/mvc/en/latest/views/tag-helpers/authoring.html
  4. My GitHub: https://github.com/dannyvanderkraan/taghelpers
  5. Dave Paquette’s TagHelper samples: https://github.com/dpaquette/TagHelperSamples
  6. _ViewImports.cshtml: http://www.exceptionnotfound.net/the-viewimports-cshtml-file-setting-up-view-namespaces-in-mvc-6/
  7. Anti forgery token: http://blog.stevensanderson.com/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
  8. http://www.sdn.nl/portals/1/magazine/SDN-Magazine-128.pdf

ASP.NET Core 1.0’s DNX and MSTest

Intro

DNX is .NET’s new cross-platform execution environment that was developed alongside ASP.NET Core 1.0 (formerly known as ASP.NET 5) [1]. I came across multiple sources (here for instance [2]) that claimed you can’t write tests for code based on DNX with MSTest. And although I prefer xUnit, I felt bad for the MSTest community, so I decided to spend a little time on this and found that MSTest works perfectly fine! You can check out the code on my GitHub [3].

Simple DNX project

I’ve made this example without Visual Studio, just to better show what DNX is. So open your favourite file explorer and, at a location of your choice, create the folders as follows:

Figure 1: Folder structure

In the root folder “DnxAndMsTest”, create a “global.json” with the following JSON in it via your favourite text editor:

Figure 2: Global.json
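Based on the side note below, it looks roughly like this:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-rc1-update1"
  }
}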

Side note: It’s not required, but I’ve used a “src” folder and a “test” folder for this example to keep the project clean. Via the “projects” property we tell DNX which folders need to be included. And we can set the minimal required DNX version, which is set to “1.0.0-rc1-update1” for this example.

Then in the “SomeDnxProject” folder add a “project.json”, a “Program.cs” file and a “MessageMaker.cs” file.

Open them in your favourite text editor and add the following JSON to the “project.json” file:

Figure 3: project.json

Add the following code to the “Program.cs” file:

using System;

namespace SomeDnxProject
{
    public class Program
    {
        public static void Main(string[] args)
        {
			var messageMaker = new MessageMaker();
			Console.WriteLine(messageMaker.GetMessage());
        }
    }
}

And the following code to the “MessageMaker.cs” file:

using System;

namespace SomeDnxProject
{
	public class MessageMaker {
		public string GetMessage(){
			return "Hello from DNX!";
		}
	}
}

Side note: The “project.json” is required, because DNX uses this file for all its configurations. The “public static void Main” method in “Program.cs” file is required because DNX uses this as its entry point to run the application. “MessageMaker.cs” simply contains a method which we can simply test with MSTest later on.

Then in the “SomeDnxProject.MsTest” folder add the files “project.json”, “Program.cs” and “MessageMakerTests.cs”.

The “project.json” contains the following JSON:

Figure 4: project.json for test project
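A sketch of what this test project.json can look like; the exact dependency versions, framework monikers and the command value are assumptions, but the key ingredients are a reference to the project under test, the MSTest.Runner.Dnx package and a "test" command:

{
  "version": "1.0.0-*",
  "dependencies": {
    "SomeDnxProject": "1.0.0-*",
    "MSTest.Runner.Dnx": "1.0.0-rc1"
  },
  "commands": {
    "test": "MSTest.Runner.Dnx"
  },
  "frameworks": {
    "dnx451": { }
  }
}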

In the “Program.cs” file you add:

using System;

namespace SomeDnxProject.MsTest
{
    public class Program
    {
        public static void Main(string[] args)
        {
			//Dummy Main method so DNX stops complaining.
        }
    }
}

Then add to the “MessageMakerTests.cs” the following code:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using SomeDnxProject;

namespace SomeDnxProject.MsTest
{
	[TestClass]
	public class MessageMakerTests
	{
		public MessageMakerTests()
		{
		}

		[TestMethod]
		public void GetMessage_GivenNone_ExpectedDefault()
		{
			var expectedMessage = "Hello from DNX!";
			var messageMaker = new MessageMaker();
			var message = messageMaker.GetMessage();
			Assert.AreEqual(expectedMessage, message);
		}

	}
}

Go back to the root of the project “DnxAndMsTest” in a console and type “dnu restore” to create the “project.lock.json” files which DNX needs to run. Then navigate to the “SomeDnxProject” folder and type “dnx run” to try everything out. You should see the following:

Figure 5: DNX run

If not, you need to check whether your installation of DNX is correct and whether you followed the above steps correctly. But if you see the above result, you can now navigate to the “SomeDnxProject.MsTest” folder, type “dnx test” and see the following result:

Figure 6: DNX passed

MSTest.Runner.Dnx

The key component is the assembly “MSTest.Runner.Dnx”, which can be acquired via NuGet [4]. It has only been available since December 2015, so it is understandable that it has been missed by others, especially because with the release of ASP.NET 5 it seemed xUnit was endorsed. But this assembly makes sure your MSTest tests get discovered, also in Visual Studio’s Test Explorer by the way. I’ve tried it in an ASP.NET Core 1.0 project.

Conclusion

I hope I can make some MSTest users happy. You can work on the cutting edge and still use your beloved MSTest if you want. Make sure you check out my GitHub [3] for the necessary files and if any MSTest user is really happy now, let me know!

Links

  1. DNX overview: https://docs.asp.net/en/latest/dnx/overview.html
  2. Unit testing ASP.NET Core 1.0: http://www.centare.com/asp-net-core-1-0-unit-testing/
  3. Code on GitHub: https://github.com/DannyvanderKraan/DnxAndMsTest
  4. MSTest.Runner.Dnx: https://www.nuget.org/packages/MSTest.Runner.Dnx/1.0.0-rc1

Tracking code comments with Task List

Intro

I discourage the use of code comments (no, keep reading please), because they are unmaintainable, don’t force you to write readable code and are often written so poorly that they don’t reveal intent anyway. I’ve written about this a while back [1].

Having said that, sometimes you want to add a code comment for various reasons. So if you’ve exhausted your other options (refactoring so the code can be read by itself, XML documentation, etc.) and you must write code comments, wouldn’t you then prefer trackable code comments?

Task List

In Visual Studio you can track your code comments in the Task List window by using certain tokens in your comments (for instance: “//TODO Water the plants”). If you don’t see your Task List you can open it by clicking View and then Task List in your menu (Ctrl+W, T):

Figure 1: View –> Task List (your menu may be different)

Once you’ve clicked this menu item you’ll see the Task List window at the bottom of Visual Studio by default:

Figure 2: Task List window

Side note: Ignore the icon in front of the “UNDONE” token for now.

In figure 2 you can see some of the default tokens I’ve used, namely: “UnresolvedMergeConflict”, “TODO”, “HACK” and “UNDONE”.

Side note: You can change the sort order, reorder columns and show/hide columns. You can also navigate to the next and previous task in the list. [2]

Tokens

You can view, edit or even add these tokens via the Tools menu and then the Options menu item, and then at the Task List item under the Environment node in the Options dialog. By default Visual Studio has the following tokens:

Figure 3: Options dialog

Side note: “UNDONE” is not entirely default in figure 3 which I’ll get back to.

As you can see tokens have unique strings (like “TODO”), but also priorities. This changes the icon in front of a token in the Task List and their order in the list. You’ve got ‘normal’ priority, which does nothing. You’ve got ‘high’ priority, which will display a red exclamation icon in front of the token and will make these types of comments appear first in the Task List. And you have ‘low’ priority, which will display a black downwards arrow in front of the token and make comments of this type appear at the bottom in the Task List.

Side note: The “TODO” token appears to be the only one you can’t mutate. This kind of spoiled my plan to change all the tokens to include the “:”, because this looks much better in your Task List. Oh well…

Custom Tokens

You can also add your own tokens. Just type in a new name at the “Name” textbox and the “Add” button will be enabled. Adjust the priority if you want. Click the “Add” button and then click “Ok” to close the Options dialog. I’ve added “CRIT” (critical) and made the priority ‘high’.

I’ll first show the code listing so far which uses all the tokens (including the custom one):

Figure 4: Complete code listing
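A sketch of such a listing; the comment texts are made up, but the tokens are the ones discussed above (including the custom "CRIT"):

public void DoChores()
{
    // TODO Water the plants
    // HACK Temporary workaround until the configuration story is sorted out
    // UNDONE Finish the error handling here
    // UnresolvedMergeConflict Check which version of this method should survive
    // CRIT Remove the test connection string before going to production
}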

Which makes the Task List look like:

Figure 5: Task List with custom token (and a shortcut)

You can see all the tokens at play, with the ‘high’ priority tokens “CRIT” and “UnresolvedMergeConflict” at the top and the “UNDONE” token, which I’ve set to ‘low’ priority, at the bottom.

Shortcuts

But what’s that highlighted line in figure 5? Have you noticed how that’s just a line of code with no token? That is because this is the last capability of the Task List I wanted to show in this blogpost, called “Shortcuts”. I actually didn’t show you everything at figure 4 (so mean of me). If I may direct your attention to the following screenshot:

Figure 6: Shortcut in code

Check out the icon at line 20. That is a shortcut, which makes the line of code appear on your Task List. All you have to do is double click on it in your Task List and you’ll jump to this line of code from wherever you are in your solution. You can make shortcuts by placing the cursor on the line of code where you want one and then clicking the Edit, Bookmarks, Add Task List Shortcut menu items.

User Tasks

As of Visual Studio 2015, “User Tasks” no longer exist. And in my humble opinion this is a good thing. Let dedicated tools like TFS or Jira keep track of your tasks. They do a much better job at it.

Links:

  1. XML documentation: https://dannyvanderkraan.wordpress.com/2015/11/18/increase-productivity-with-xml-documentation/
  2. Using the Task List: https://msdn.microsoft.com/en-us/library/txtwdysk.aspx

Visual Studio Tip of the Week: Block Selection

You can select a whole block of code easily in Visual Studio by holding down the left Shift + left Alt keys and then selecting the text with the arrow keys. Imagine you have a variable called “object1” and you would like to change that into object2 for some reason: just select all the 1’s in the aforementioned way and type in a 2.

 


Unit Testing made easy with Dependency Injection

Synopsis: An example in which we walk through a case where Dependency Injection made writing automated tests, and in particular unit tests, a delight. This also makes the discipline of Test Driven Development much more of an option. Find the example used for this post on my GitHub: [9]

Intro

I have introduced Dependency Injection by giving an example in which we show some late binding [1]. After that I showed a possible route to take if certain dependencies themselves depend on run-time values [2]. And there’s an article about how to apply Aspect Oriented Programming (AOP) by using Dependency Injection’s Interception [3]. Now I’d like to talk about how Inversion of Control (IoC) via Dependency Injection (DI) helps you write better unit tests and eventually apply Test Driven Development (TDD). But before we can do that I feel we should first talk about automated tests themselves, because these are crucial for doing healthy TDD. In particular I noticed some confusion about integration tests and unit tests. So what is the definition of a unit test? What is a good unit test? And why does this difference matter so much?

Side note: If you don’t care about TDD, it is perfectly fine to focus on the fact that writing automated tests (unit tests in particular) is very easy if you apply DI.

Integration Test versus Unit Test

To get a good grip on this I think we should compare integration tests with unit tests.

Integration Test

Most automated tests are actually integration tests. It doesn’t matter if you used MS Test’s Unit Test item template; whether a test is an integration test or not is about what is factually tested. If the ‘unit of work’ under test spans a couple of components working together, often including external components like a database, a file, a web service, etc., to test if they all play their part, then it is an integration test. An integration test is allowed to be slow, due to its nature. That is why they are often run during a nightly build or something similar. Figure 1 illustrates connected code components which would have to be tested via an integration test, with long running processes like calling a web service, persisting data in the database and writing data to a file on the hard drive:


Figure 1: Integration testable

Unit Test

A test that defines a clear ‘unit of work’, without any external component involved, and with just enough components to test what must be tested, is a unit test. To emphasize the remark about external components: a unit test can’t extend to persistence layers like databases, files on disk, or web services somewhere on the internet. And if too many components are involved, the test does too much and can’t be read easily. Unit tests are mainly for teams of developers to keep proving to themselves that the code still works as intended. Components not directly involved in the current test need to be faked (stubbed or mocked). Don’t worry if some terms are not familiar. I am working on an in-depth article about unit testing and TDD, but if you read “The Art of Unit Testing” by Roy Osherove [4] you will know everything you need to know and more! He has for instance an extensive (and much better) definition of a unit test on his website [5]. Figure 2 illustrates with red crosses where the connections in figure 1 would need to be severed to make it unit testable:


Figure 2: Unit testable

So why does this matter?

Unit Tests need to be fast, concise and easy to read. In an Agile Scrum environment for instance where you need to release a potentially shippable increment at the end of every sprint you need to maximize your feedback loops. You want to know on codebase level at every moment if everything is still in order. That is why a Scrum team should strive for automated unit tests that fire off with every source control check in. And that is why they need to be fast. Because if they are slow you don’t want to fire them off with every check in. And because they need to be fast and need to tell you at a glance where something is wrong, they need to be small and concise. Because they are small and concise you will have a lot of tiny unit tests. And the Scrum team will need to maintain them. If unit tests are not readable they will be ignored and poorly maintained, effectively killing off the effect of unit testing. I’ve summarized this in figure 3:


Figure 3: Maximize feedbackloop

Now that I’ve illustrated the difference it is time to move on to actual code!

Unit test without DI

I started adding (as I thought) unit tests to the solution. It has proven to be good practice to make a separate project for your tests and keep them out of your main project [6]. I started out by using the Unit Test Project template. All this mainly does is add the correct assemblies to start using MS Test, plus the tests show up in Visual Studio’s Test Explorer out of the box. For this example, however, I’ve used xUnit instead of MS Test (if you are interested in my reasons why, you can check out a post at Mark Seemann’s blog [7]). The tests and classes from figure 1 would look like:

[Images: SomeService, SomeRepository, SomeClient, SomeLogger and the first unit test]
Listing 1: First attempt at SomeService and its unit test
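A sketch of the idea behind that first attempt: DoStuff news up its own dependencies and talks to the slow external components directly. The class names follow the listing names above; the method names and the exact split of the delays are assumptions that add up to the 8000 milliseconds mentioned below:

public class SomeService
{
    public bool DoStuff()
    {
        var repository = new SomeRepository();
        repository.WriteStuffToDb();       // hits the database, ~3000 ms

        var client = new SomeClient();
        client.CallSomeWebService();       // calls a web service, ~3000 ms

        var logger = new SomeLogger();
        logger.WriteToLogFile();           // writes a file to disk, ~2000 ms

        return true;
    }
}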

So what are the problems with the unit test above? The test takes too long and it needs to assert too much. If you add up the milliseconds you’ll be looking at 8000 milliseconds in total. Of course this is fictitious and a bit exaggerated for the purpose of this example, but even half of that would be way too much for a simple unit test. This is caused by calls to external components, which simply take longer to process. So, as figure 2 showed us, we need to sever the connections (read: dependencies) and not actually write something to the database, or call a web service, or write a log file, if we simply want to unit test DoStuff. To be able to do this we need to stop initializing everything in the DoStuff method. That is where Dependency Injection is going to help us. Listing 2 shows our second attempt with ImprovedSomeService:


Listing 2: ImprovedSomeService
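A sketch of the constructor-injected version; the interface names mirror the classes from listing 1 (WriteStuffToDb is mentioned again in the TDD section below, the other method names are assumptions):

public class ImprovedSomeService
{
    private readonly ISomeRepository _repository;
    private readonly ISomeClient _client;
    private readonly ISomeLogger _logger;

    public ImprovedSomeService(ISomeRepository repository, ISomeClient client, ISomeLogger logger)
    {
        _repository = repository;
        _client = client;
        _logger = logger;
    }

    public bool DoStuff()
    {
        _repository.WriteStuffToDb();
        _client.CallSomeWebService();
        _logger.WriteToLogFile();
        return true;
    }
}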

And the unit test could be something along the lines of:


Listing 3: Second attempt at unit test with ImprovedSomeService
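A sketch of what that test can look like with xUnit and FakeItEasy (the test name and the exact assertions are assumptions):

using FakeItEasy;
using Xunit;

public class ImprovedSomeServiceTests
{
    [Fact]
    public void DoStuff_GivenFakedDependencies_ReturnsTrueAndWritesToDb()
    {
        // Arrange: fake every dependency that is not part of this unit of work.
        var repository = A.Fake<ISomeRepository>();
        var client = A.Fake<ISomeClient>();
        var logger = A.Fake<ISomeLogger>();
        var service = new ImprovedSomeService(repository, client, logger);

        // Act
        var result = service.DoStuff();

        // Assert
        Assert.True(result);
        A.CallTo(() => repository.WriteStuffToDb()).MustHaveHappened();
    }
}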

As you can see in listing 3, Dependency Injection makes it very easy to fake all the components that are not needed in the ‘unit of work’ at hand. In this example the free library FakeItEasy [8] is used to create mocks and stubs. The unit test begins by making some fakes (FakeItEasy syntactically makes everything a fake, and the context decides whether it is used as a mock or a stub). These fakes are injected into the service via its constructor. Then the test simply proceeds by asserting whether some calls to certain methods actually happened and whether DoStuff returned true. This test is focused and concise, plus the time this test takes is not even worth mentioning.

TDD

It should be easy to see via listing 3 how Dependency Injection can make your pain with TDD a little lighter. Let’s throw the familiar diagram on here to provide some context:

Figure 4: TDD cycle

As you know you write the test first, so imagine we had the requirement to ‘do’ some ‘stuff’. I write the test first, so I try to call a DoStuff method which didn’t exist. This is the red state. I generate the method. The test becomes green. I refactor this by making SomeService class, putting the DoStuff method in there and initializing SomeService to call DoStuff on it in the test. And you repeat the process by wanting to know in the test if DoStuff failed or succeeded. The test is red. DoStuff needs a Boolean return value and we return true in the method. Test is green. During refactoring we could for instance only return true if some operation actually succeeded. This operation could for instance be putting stuff in the database. So in the DoStuff method of SomeService I call WriteStuffToDb on the non-existent class SomeRepository. Making the test become red again. I generate the class and the method. I am not interested right now in the actual implementation of the call, so to make the test green I fake the class and test if the method is called at all. Then I refactor this so the implementation of ISomeRepository is injected through the constructor making it easy for the next iterations to insert dummy repositories. And so on…

Conclusion

I showed you the difference between integration tests and unit tests. Then I showed you how Dependency Injection makes unit testing a lot easier. Finally I brought some TDD into the mix. You can find all the code and more on my GitHub: [9]. This article is also published in the Dutch SDN magazine #127, which you can download here: [10]. I recommend that you do, because Pepijn Sitter [11] wrote a cool article about Roslyn’s Analyzers, Gerald Versluis [12] about Xamarin.Forms, and so on. So go check it out.

Side note: I used Ninject as the IoC container because the API is awesome!

Links

  1. Introduction Dependency Injection: https://dannyvanderkraan.wordpress.com/2015/06/15/real-world-example-of-dependeny-injection/
  2. Dependency Injection based on run-time values: https://dannyvanderkraan.wordpress.com/2015/06/29/real-world-example-of-dependency-injection-based-on-run-time-values/
  3. AOP with Interception: https://dannyvanderkraan.wordpress.com/2015/09/30/real-world-example-of-adding-auditing-with-dependency-injections-interception/
  4. Art of Unit Testing: http://artofunittesting.com/
  5. Definition of a Unit Test: http://artofunittesting.com/definition-of-a-unit-test/
  6. Great discussion about seperate projects for tests: http://stackoverflow.com/questions/2250969/should-unit-tests-be-in-their-own-project-in-a-net-solution
  7. Reasons for XUnit: http://blog.ploeh.dk/2010/04/26/WhyImmigratingfromMSTesttoxUnit.net/
  8. FakeItEasy: https://github.com/FakeItEasy/FakeItEasy
  9. GitHub: https://github.com/DannyvanderKraan/DependencyInjectionMakesUnitTestingSimple
  10. SDN 127: http://www.sdn.nl/portals/1/magazine/SDN-Magazine-127.pdf
  11. Pepijn Sitter: https://nl.linkedin.com/in/pepijn-sitter-65514a
  12. Gerald Versluis: https://nl.linkedin.com/in/jfversluis

Increase productivity with XML documentation

Intro

This post was originally a guest blog which you can find here. Let’s talk about commenting code for a second. I was triggered by this rant http://blog.codefx.org/techniques/documentation/comment-your-fucking-code/ to examine how we actually do this on our team. And as I reviewed our codebases I quickly discovered that almost each and every one of us has a totally different style of commenting. I suggested to the team during a Sprint Retrospective to not only comment our code in the same way, but also in the same manner. In an Agile Scrum environment where seniors leave the team and juniors enter the team, we could all agree that this was going to help our code become more readable. To help us get started on this I introduced XML Documentation Comments: https://msdn.microsoft.com/en-us/library/b2s063f7.aspx, a style of commenting code I am into. I’ll first briefly explain what it is and then I’ll explain why I prefer XML documentation over inline comments.

What is XML documentation?

Adding XML documentation is like adding annotations above classes and methods. Except they are not metadata and not included in the compiled assembly (and therefore not accessible through reflection). You can write XML documentation by typing triple slashes directly above the line with the class or method declaration. The Visual Studio IDE will add the correct and commonly used tags. So above a class it looks like:
Figure 1: XML documentation above class

Side note: I am showing wrong comments (as in not revealing intent) on purpose. We’ll get back to that later.

And above a method as follows:
Figure 2: XML documentation above method

Side note: Notice how the tag ‘param’ is added automatically with the attribute name and the correct parameter name. This is monitored by the IDE and will produce a warning if it’s not correct, so very maintainable.

Above a method with a return type:
Figure 3: XML documentation above method with return type

Side note: Please note the escape characters around ‘IAgenda’.

The above examples are automatically generated and are therefore commonly the most used. If you want to play around with other tags which are recommended for use, you can follow this link: https://msdn.microsoft.com/en-us/library/5ast78ax.aspx

Why XML Documentation over inline comments?

So why do I prefer XML documentation over inline comments? Well, there are several reasons:

  • Forces you to think about your comments
  • Maintainable
  • Provides IntelliSense
  • Automatically builds API documentation

I’ll explain them in order:

Forces you to think about your comments

As I’ve said in the side note beneath figure 1 I have intentionally made some horrible comments. Have you ever come across something like:

//Gets a new Foo
public static IFoo GetNewFoo(ICustomer customer)
{
    //New up a Foo
    IFoo foo = new Foo();
    //If IsPreferred = true then add collection of Bars
    if (customer.IsPreferred)
    {
        foo.Bars = GetBars();
    }
    //Return the foo
    return foo;
}

What do these comments say? Absolutely nothing more than the code that is written. What should comments do? They should reveal the business intent behind code. They should tell the new guy that has to clean up your mess why the code was written the way it was back then. Let’s try this with more business intent in the mix:

/// <summary>
/// Gets a new Foo for the customer. If the customer is a preferred customer they are
/// entitled to a collection of Bars.
/// </summary>
/// <returns>A new instance which adheres to the IFoo interface, which contains a
/// collection of Bars depending on whether or not the customer has the preferred status.</returns>
public static IFoo GetNewFoo(ICustomer customer)
{
    IFoo foo = new Foo();
    if (customer.IsPreferred)
    {
        foo.Bars = GetBars();
    }
    return foo;
}

This example is terribly contrived, but I hope I can make clear what I mean with comments that portray business intent. And I’ve experienced that thinking about good XML Documentation forces you to think more about the comments you write down.

Maintainable

XML Documentation is checked to some degree by the Visual Studio IDE, generating warnings where needed. Inline comments are not checked at all, and how could they be checked? There’s no structure.

Provides IntelliSense

How nice is it to tap the period key, focus on the properties/methods and see a clear and concise explanation? Fantastic, right? Imagine having a colleague see the IntelliSense you provided and understand what it does. Awesome, right? To give a concrete example, the IntelliSense for the method in figure 3 would be:
[Image: IntelliSense showing the XML documentation for the method]

Automatically builds API documentation

You’ve read that correctly. There’s a free open source tool out there which is called Sandcastle (https://github.com/EWSoftware/SHFB) which generates your API documentation based on these XML Documentations. Hook Sandcastle into your Build Definition on TFS and have it generate API documentation as part of your Continuous Integration pipeline!

Conclusion

I hope after reading my blogpost I’ve made you enthusiastic about XML Documentation as well. There are some nice benefits to be had, so drop this in your team the next Sprint Retrospective and see how it will land.


Real World Example of Adding Auditing With Dependency Injection’s Interception

Synopsis: This article walks through a real world example in which the Crosscutting Concern (1) ‘Auditing’ was added to the application without breaking the Single Responsibility Principle (2), nor the Open/Closed Principle (3), by utilizing Dependency Injection’s Interception. Get the example from my GitHub: https://github.com/DannyvanderKraan/DependencyInjectionInterception . If you can read Dutch you can also download the SDN magazine (Microsoft Azure/Xamarin) in which this article is featured: http://www.sdn.nl/portals/1/magazine/SDN-Magazine%20126.pdf

Intro

For this article I assume you have some basic knowledge of Dependency Injection. But if you don’t, not to worry! I have introduced Dependency Injection (4) and showed a possible route to take when the dependencies depend on run-time values (5). Common scenario: after a few months of developing the core domains of an application, ‘auditing’ capability needs to be added. Adding Crosscutting Concerns like auditing to an existing application can be quite a hassle if the architecture hasn’t taken it into account from the ground up. However, taking all Crosscutting Concerns into account while ‘growing’ the architecture is not beneficial in an Agile Scrum environment, in which you are supposed to deliver potentially shippable increments every sprint. Enter the domain of Aspect-oriented Programming (6). AOP promotes a loosely coupled design with which you can mitigate these problems. Add in an Inversion of Control (7) framework that supports AOP by enabling Interception (like Unity) and Crosscutting Concerns become a breeze. Let me show you how I added auditing and let us scratch the surface of Interception.

Case

As this particular application deals with patients and their medical data, it is important to log which user did what, when and where. Next to a mutation database on the medical data (which is a core domain due to the nature of the application) I also wanted to log things in more detail, like: User A read prescription X of patient Y on July 23rd 2015 at 8:40 am. Or: User B authorized contact moment #11 of patient B, created by User A, on July 23rd 2015 at 8:42 am. This is not a concern during the first few sprints of development. But at a certain point you’ll want to add this auditing capability and you’ll have to crack open several classes to insert said functionality, violating the Open/Closed Principle. And the Single Responsibility Principle, because now the classes are apparently also responsible for auditing? We’ll get to the solution for this problem with Dependency Injection and Interception soon. But to really understand what is going on, let’s solve this problem by taking a look at the Decorator Pattern (8) with poor man’s Dependency Injection first.

Side note: This blogpost does not address the Decorator Pattern with inheritance due to its drawbacks, which is perhaps a subject for another article.

Decorator Pattern

Figure 1: Decorator Pattern in action!

We have a class named PrescriptionService which implements the interface IPrescriptionService. The interface consists of only one method: GetPrescriptionByID. GetPrescriptionByID is implemented as follows:

Console.WriteLine("{0} is called!", nameof(GetPrescriptionByID));
IPrescription prescription = Program.Container.Resolve<IPrescription>();
prescription.ID = ID;
switch (ID)
{
    case 1:
        prescription.PatientID = 1;
        prescription.MedicationName = "Aspirin";
        prescription.Dosage = "2 tablets each day";
        break;
    case 2:
        prescription.PatientID = 1;
        prescription.MedicationName = "Unisom";
        prescription.Dosage = "1 tablet each day";
        break;
    case 3:
        prescription.PatientID = 2;
        prescription.MedicationName = "Dulcolax";
        prescription.Dosage = "2 tablets every other day";
        break;
    case 4:
        prescription.PatientID = 3;
        prescription.MedicationName = "Travatan";
        prescription.Dosage = "3 drops each day";
        break;
    case 5:
        prescription.PatientID = 4;
        prescription.MedicationName = "Canesten";
        prescription.Dosage = "Apply 6 times each day";
        break;
    default:
        throw new ArgumentException(String.Format("{0} has unknown value", nameof(ID)));
}
return prescription;

Listing 1: GetPrescriptionByID

But after GetPrescriptionByID is called we want to log this event. So I’ve added AuditingPrescriptionService whose sole purpose it is to log this call. It wraps the original PrescriptionService by demanding an instance of any class that implements IPrescriptionService in its constructor (Constructor Injection). And it needs to know the user for my Auditing requirements, so demands that too in its constructor. Like so:

public AuditingPrescriptionService(IPrescriptionService service, int userID)
{
    if (service == null)
    {
        throw new ArgumentNullException(nameof(service));
    }
    if (userID <= 0)
    {
        throw new ArgumentOutOfRangeException(nameof(userID));
    }
    this.Service = service;
    this.UserID = userID;
}

Listing 2: Constructor AuditingPrescriptionService

And the method GetPrescriptionByID is augmented in AuditingPrescriptionService:

IPrescription prescription = Service.GetPrescriptionByID(ID);
//TODO write to log that user ? read prescription ? of patient ? at a certain date and time
return prescription;

Listing 3: Augmented GetPrescriptionByID

Let us not worry about the actual implementation right now. What is important to note, however, is that with this approach you will have to write a decorator for everything you need to wrap. And what if you have more than one Crosscutting Concern to implement, like caching or authorization? The number of decorator classes will grow rapidly. Lots of little objects will have to be created to implement all Crosscutting Concerns, creating a potential maintenance nightmare. The solution, as said before, is Interception, but let’s take a brief look at how this problem could be solved with an AOP framework as well, to fully appreciate Interception.

AOP Framework

There are lots of fantastic AOP frameworks out there for .NET like LOOM.NET, Aspect.NET and Spring.NET supports it now too. But PostSharp must be one of the most popular frameworks at the moment of writing this. There are loads of aspects to choose from in PostSharp and it can be as easy as moving the caret to the name of the class or method you want to enhance and choosing the pattern from the light bulb (smart tag in versions previous to Visual Studio 2015). If we want to add logging to a method all you have to do is follow these simple steps (9). It is like magic right? Why wouldn’t I want to use an AOP framework?

There are a few reasons in order of importance to me:

  1. It’s tightly coupled to the AOP framework’s implementation of logging, even if you have a choice.
    Due to the nature of the industry I work in, the operational environment has some restrictions, which is why I have to build specialized logging.
  2. If you’ve followed the steps from the link above to add logging, you have seen that the attribute [Log] is added on top of a method. Without settings it’ll be default logging. So if I don’t want the default behaviour anymore I need to adjust the logging attribute. To be clear on this: I’ll have to change all the [Log] attributes.
    I can see people’s minds changing a lot about logging in the near future (often pushed by our government), so I need something more flexible.
  3. You can’t late bind these aspects through the configuration file, since these aspects are compiled into the assemblies.
    An extension to points one and two in a way. Some operational environments will require different behaviour from the logging than other environments. If I can’t tell at run time which logging aspect I need, then I won’t be able to fulfil this requirement.
  4. The complexity of unit tests increases by using an AOP framework.
    I am a big fan of TDD, and while Dependency Injection actually caters to unit testing from the ground up, an AOP framework impedes unit testing to a point where your tests become harder to read (and people feel the need to write documentation about how to actually go about doing it: (10)).

Side note: What is important to mention here is that I am not against AOP frameworks. On the contrary, I don’t understand the whole DI versus AOP debate out there, since I think they complement each other in certain architectures. If you are interested in this kind of information I can recommend a good blogpost by Kenneth Truyers: (11) (keep in mind he favours DI in this post though). And PostSharp, for instance, has the SkipPostSharp flag which you can apply in your test project.

These problems sketched above can easily be solved with an IoC Container’s ability to dynamically apply Interception.

Dependency Injection and Interception

Interception looks conceptually like this:
Figure 2: Concept of Interception

Some client calls the method GetPrescriptionByID of the PrescriptionService class and the call gets ‘intercepted’ by some logging behaviour first, hence the name. In the above figure I have, for illustrative purposes, drawn a ‘chain’ of Interceptions (and the funny thing is that the arrows look like a chain this way): three very common Crosscutting Concerns which you can chain together. So in this case the retrieved Prescription apparently gets cached for a period of time, but not before there is a check whether the user is authorized to see prescriptions at all. And the call gets logged no matter what. Boy, I really wish I could add these behaviours (or ‘aspects’) dynamically later on in development. But I can! I just need a little help from Unity.

Interception with Unity

There are a few simple steps:

  1. Make sure you get Unity.Interception from NuGet
  2. Add AddNewExtension<Interception>(); in the Composition Root
  3. Add a class named LoggingInterceptionBehavior (the ‘InterceptionBehavior’ part is a convention you don’t necessarily need to abide to)
  4. Add using Microsoft.Practices.Unity.InterceptionExtension; at the top of the file
  5. Have the class implement interface IInterceptionBehavior (that is how Unity actually understands this class can be used to intercept calls)

The interface has two methods and a property you need to implement:

public interface IInterceptionBehavior
{
    bool WillExecute { get; }
    IEnumerable<Type> GetRequiredInterfaces();
    IMethodReturn Invoke(IMethodInvocation input, GetNextInterceptionBehaviorDelegate getNext);
}

Listing 4: Interface IInterceptionBehavior

The property WillExecute determines whether it makes sense to have the call intercepted by this behaviour at all. If it doesn’t make sense you can set this property to false and you won’t have the unnecessary overhead of generating the proxy or intercepting class. I want my logging behaviour to always run, so I simply let WillExecute return true (get { return true; }).

The method GetRequiredInterfaces is basically there to pre-set which types you would like to intercept with this behaviour. But I like to do that in the Composition Root (and so should you), so I just let this method return an empty array of type Type (return Type.EmptyTypes;).
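Put together, the skeleton of the behaviour class looks roughly like this (only the Invoke body, shown in listing 5, does real work):

public class AuditingInterceptionBehavior : IInterceptionBehavior
{
    public bool WillExecute
    {
        get { return true; }
    }

    public IEnumerable<Type> GetRequiredInterfaces()
    {
        return Type.EmptyTypes;
    }

    public IMethodReturn Invoke(IMethodInvocation input, GetNextInterceptionBehaviorDelegate getNext)
    {
        // The actual auditing implementation is shown in listing 5;
        // this pass-through just keeps the chain going.
        return getNext()(input, getNext);
    }
}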

The Invoke method is where the real work gets done. The parameter ‘input’ represents the call to a method (12) and getNext is the parameter which actually contains a delegate which can be invoked to get the next delegate to call, to keep the chain going (13). For our AuditingInterceptionBehavior I have implemented it as follows:

//The logged in user happens to be Danny all the time. ;-)
string identity = "Danny";
//Before: You can write a message to the log before the next behaviour in the chain/intended target gets called.
WriteLog(String.Format(
    "{0}: User {1} {2}. Technical details: Invoked method {3}",
    DateTime.Now.ToLongTimeString(),
    identity,
    (input.MethodBase.GetCustomAttributes(typeof(DescriptionAttribute), false).FirstOrDefault() as DescriptionAttribute)?.Description,
    input.MethodBase));
//Actual call to the next behaviour in the chain or the intended target.
var result = getNext()(input, getNext);
//After: And you can write a message to the log after the call returns.
if (result.Exception != null)
{
    //You can for instance write to the log if an exception has occurred.
    WriteLog(String.Format(
        "{0}: Method {1} threw exception: {2}",
        DateTime.Now.ToLongTimeString(),
        input.MethodBase,
        result.Exception.Message));
}
else
{
    //Or you can write more useful information.
    WriteLog(String.Format(
        "{0} User {1} {2}: {3}. Technical details: Returned from method {4}",
        DateTime.Now.ToLongTimeString(),
        identity,
        (input.MethodBase.GetCustomAttributes(typeof(DescriptionAttribute), false).FirstOrDefault() as DescriptionAttribute)?.Description,
        result.ReturnValue,
        input.MethodBase));
}
return result;

Listing 5: Implementation of the Invoke method

Side note: I decorated the GetPrescriptionByID in the interface with a Description attribute and I have overridden the ToString method in Prescription so the message is easier to read for display purposes.

The actual implementation is not important.
The important part is in the middle: var result = getNext()(input, getNext); this actually keeps the chain going. By invoking the getNext delegate, an InvokeInterceptionBehaviorDelegate is returned with which you can either call Invoke on the next behaviour or invoke the intended target, depending on the chain.
The other important bit is: return result; which keeps the chain going back up by returning the IMethodReturn instance (14). It contains the intended return value in its property ReturnValue, which is used in listing 5. Another useful property used in listing 5 is Exception, so you can handle exceptions in the chain. WriteLog is actually a simple call to the Console (Console.WriteLine(message);) as this example is a Console Application, but it could of course be a call to your auditing library of choice.

Now in the Main method of the Program class, which we’ll use as the Composition Root, the following lines of code are needed:

Container.RegisterType<IPrescription, Prescription>();
Container.RegisterType<IPrescriptionService, PrescriptionService>(
    new Interceptor<InterfaceInterceptor>(),
    new InterceptionBehavior<AuditingInterceptionBehavior>());
IPrescriptionService service = Container.Resolve<IPrescriptionService>();
IPrescription prescription = service.GetPrescriptionByID(1);
Console.WriteLine("Retrieved: {0}", prescription);
Console.ReadKey();

Listing 6: Main method of class Program

Let’s have a brief explanation of the interceptor types. As you can see at “new Interceptor” we are using the “InterfaceInterceptor”, because the class only implements one interface. Always try to use this type of interception for performance reasons. Only use TransparentProxyInterceptor if your class implements more than one interface (or no interface at all) and you’d like to add behaviour to all implemented methods. The aforementioned interceptor types work by dynamically creating a proxy object which is not type compatible with the target object. Try it: if you write for instance “?service.GetType()” in the Immediate Window you’ll get something like “DynamicModule.ns.Wrapped_IPrescriptionService_83729647ce664f89b08534edc98a7858” as FullName. If you want more control over your behaviours and you need to intercept internal calls in the class (or abstract classes), you’ll need VirtualMethodInterceptor. But because it works by extending the type of the target object, you’ll need to make your methods virtual and the class public, and it cannot be used on existing objects, only by configuring the type interception. The chain of behaviours is then established by overriding the virtual methods in the base type.

For this example InterfaceInterceptor was enough. And in conclusion, running the Main method from listing 6 will output:
Figure 3: Output

As you can see in figure 3, “GetPrescriptionByID is called!” appears right in between the two messages from the AuditingInterceptionBehavior, showing that this behaviour was added dynamically in the call chain.

Question: A question I got specifically about this example is what you should do if you need finer-grained auditing and need to log messages within a method.
If you feel the need to do this I sincerely implore you to look critically at your design. Are your methods not too large and doing too much? Change the design accordingly, so you get auditing exactly where you need it.

Conclusion

Thank you for reading my article. I would like to leave you with the following thoughts. First of all, we’ve only scratched the surface of Interception here, so be prepared to learn more along the way. Second of all, Dependency Injection’s Interception is not the silver bullet for Crosscutting Concerns, and neither are AOP frameworks. Don’t be afraid to use what suits your needs most in your current architecture. If you can read Dutch you can also download the SDN magazine (Microsoft Azure/Xamarin) in which this article is featured: http://www.sdn.nl/portals/1/magazine/SDN-Magazine%20126.pdf. Now go have fun with my example on GitHub and add some behaviours: https://github.com/DannyvanderKraan/DependencyInjectionInterception

Links

  1. Crosscutting Concern: https://msdn.microsoft.com/en-us/library/ee658105.aspx
  2. SRP: https://en.wikipedia.org/wiki/Single_responsibility_principle
  3. Open/closed principle: https://en.wikipedia.org/wiki/Open/closed_principle
  4. Intro DI: https://dannyvanderkraan.wordpress.com/2015/06/15/real-world-example-of-dependeny-injection/
  5. DI and run-time values: https://dannyvanderkraan.wordpress.com/2015/06/29/real-world-example-of-dependency-injection-based-on-run-time-values/
  6. AOP: https://en.wikipedia.org/wiki/Aspect-oriented_programming
  7. IoC: https://en.wikipedia.org/wiki/Inversion_of_control
  8. Decorator Pattern: https://en.wikipedia.org/wiki/Decorator_pattern
  9. PostSharp’s simple steps: http://doc.postsharp.net/logging
  10. PostSharp testing: http://doc.postsharp.net/simple-tests
  11. DI vs. AOP: http://www.kenneth-truyers.net/2013/05/16/why-choose-di-interception-over-aspect-oriented-programming/
  12. IMethodInvocation: http://www.nudoq.org/#!/Packages/Unity.Interception/Microsoft.Practices.Unity.Interception/IMethodInvocation
  13. GetNextInterceptionBehaviorDelegate: http://www.nudoq.org/#!/Packages/Unity.Interception/Microsoft.Practices.Unity.Interception/GetNextInterceptionBehaviorDelegate
  14. IMethodReturn: http://www.nudoq.org/#!/Packages/Unity.Interception/Microsoft.Practices.Unity.Interception/IMethodReturn