Primer on going serverless with Azure

What is serverless?

A good and concise definition can be found here:

“Serverless architecture is an approach that replaces long-running virtual machines with ephemeral compute power that comes into existence on request and disappears immediately after use.”
~ https://www.thoughtworks.com/radar/techniques/serverless-architecture

Pros

  • Reduced Ops: there are no (read: strongly reduced) concerns about the actual technical infrastructure, making DevOps easier for developers.
  • Automatic scaling: comes out of the box for free.
  • Pay per compute: pay only for what you actually use.
  • Reduced time to market.

Cons

  • Loss of control: updates to the underlying infrastructure are applied automatically and could break dependent code.
  • Scaling is automatic and could flood components in the system that are less capable of scaling.
  • Pricing is per compute and could therefore spiral out of control.

Serverless on Azure?

On Azure there are two offerings that make serverless viable, namely Azure Functions and Logic Apps.

Azure Functions

Azure Functions are basically App Services under the hood, built on top of the WebJobs SDK and run in their own Azure Functions runtime. They make it possible to write small, concise, event-triggered software components that run in ephemeral containers, which makes it possible to charge you only for how long your code actually runs. And because they run in ephemeral containers, with no technical infrastructure to worry about, they can scale automatically as needed.
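Under the hood, each function in a Function App is described by a function.json file declaring its trigger and bindings. A minimal sketch for an HTTP-triggered function (the binding names here are typical defaults I'm assuming, not prescriptive):

```json
{
  "disabled": false,
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

The portal generates and maintains this file for you, which is exactly why there are no infrastructure concerns left for the developer.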


Let’s make one.

Walkthrough creating an Azure Function

azure-functions-figure-1

  1. First sign in to portal.azure.com
  2. Click on the green plus sign
  3. Choose Compute
  4. Choose Function App

azure-functions-figure-2

  1. Type in an app name (this will become part of the URL)
  2. Choose your subscription
  3. Choose or create your resource group
  4. Choose your hosting plan (Consumption Plan is dynamic billing and App Service Plan is static like traditional billing)*
  5. Choose your location
  6. Choose your storage account**
  7. Click on the blue “Create” button

* Choose wisely because you can’t change hosting plan afterwards!
** Will be used to store logs, etc.

Now find “my1stfunction” or whatever you called it in your resources and when you click on it you’ll see:

azure-functions-figure-3

  1. Choose Webhook + API as scenario
  2. Leave language on C#
  3. Click on the blue “Create this function” button

azure-functions-figure-4

By default this created “HttpTriggerCSharp1”, which is the actual function. As you can see from the screenshot, you can add multiple functions to one Function App. You start in the “Develop” pane, where you see the actual code of the function. A short tutorial starts you off, guiding you along the different panes which make up an Azure Function. For now let’s just copy the Function Url (see encircled in red), open up a new tab and paste it in. Then append &name=Danny to the end of the URL, because, as you can see, the default code expects a parameter with the key “name” in the query string or the body. When you hit enter you should see the following screen:

azure-functions-figure-5
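For reference, the default template code behind “HttpTriggerCSharp1” looks roughly like this (reproduced from memory of the template, so the version in the portal may differ slightly):

```csharp
using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Look for "name" in the query string...
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // ...or fall back to the request body
    dynamic data = await req.Content.ReadAsAsync<object>();
    name = name ?? data?.name;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}
```

This is why appending &name=Danny works: the template checks the query string first and only then the body.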

Logic Apps

Logic Apps are to developers what Microsoft Flow is to non-developers: they also let you create business processes and workflows visually, but with a lot more power than Microsoft Flow, which opens up a world of opportunities. Logic Apps originated from integration concerns in enterprise environments and are part of Azure’s iPaaS offerings. They are considered part of the serverless offerings of Azure because they share the same characteristics: no technical infrastructure concerns, automatic scaling and pay-per-use.

Let’s create one.

Walkthrough creating a Logic App

  1. First sign in to portal.azure.com
  2. Click on the green plus sign
  3. Choose Enterprise Integration
  4. Choose Logic App

  1. Type in a name
  2. Choose your subscription
  3. Choose or create your resource group
  4. Choose your location
  5. Click on the blue “Create” button

  1. Click “Logic App Designer”
  2. Click “When a new tweet is posted”

Logic Apps start processing with the help of triggers. Here we use a Twitter trigger, which will be fired each time a new tweet is posted with a certain tag:

  1. Enter a search term
  2. Click “New step”

  1. Choose “Add an action”

  1. Choose “Slack – Post Message”

  1. Enter a channel name
  2. Click here (or click “Add dynamic content”) to open the panel on the right
  3. Click TweetText

The Logic App should look like this in the designer, depending on your settings (I chose to look for “#azure” on Twitter and drop the tweets in a Slack channel called “#sdn”). Warning: beware that if you look for #azure you’ll get a lot of messages! Then follow these steps:

  1. Click “Save”
  2. Click “Run”

Now, as per the settings for frequency at the Twitter trigger, this Logic App will drop the Tweets in your chosen Slack channel with the search term you specified.
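Behind the designer, a Logic App is just a JSON workflow definition. A heavily simplified sketch of what the designer produces for this trigger/action pair (property names are abbreviated and the connection details are omitted; treat the exact shape as an assumption):

```json
{
  "triggers": {
    "When_a_new_tweet_is_posted": {
      "type": "ApiConnection",
      "recurrence": { "frequency": "Minute", "interval": 1 },
      "inputs": {
        "queries": { "searchQuery": "#azure" }
      }
    }
  },
  "actions": {
    "Post_Message": {
      "type": "ApiConnection",
      "runAfter": {},
      "inputs": {
        "queries": {
          "channel": "#sdn",
          "text": "@{triggerBody()?['TweetText']}"
        }
      }
    }
  }
}
```

The “TweetText” token you clicked in the dynamic content panel ends up as that @{triggerBody()…} expression.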

Logic Apps ❤ Azure Functions

The biggest gamechanger for serverless on Azure however is the ability to use Azure Functions in Logic Apps! Let’s see how easy it is to add a step as an Azure Function in a Logic App.

First go back to the Azure Function you created and make sure the code looks like this:

#r "Newtonsoft.Json"

using System;
using System.Net;
using Newtonsoft.Json;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    // Read the raw request body (the tweet JSON the Logic App passes in)
    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);

    log.Info(jsonContent);

    // Return a greeting mentioning the tweet's author
    return req.CreateResponse(HttpStatusCode.OK, new {
        greeting = $"New tweet from {data.TweetedBy}!"
    });
}

Then in the Logic App choose to add a new step so you get the following screen:

  1. Choose “Azure Functions”

  1. Click request body (or “Add dynamic content”) to open the pane on the right
  2. Choose “Body”

  1. Click message text (or “Add dynamic content”) to open the pane on the right
  2. Choose “Body” (make sure it’s from the Azure Function and not from the Twitter trigger)

Now if the Azure Function step is behind the Slack – Post message step, then simply drag and drop the Slack – Post message step down so that it looks like*:


*Of course things like the name of the Azure Function and the Slack channel etc. might differ.

Now if you check the Slack channel you’ll see who each tweet is from.

Conclusion

Lots of interesting developments are going on in the Azure cloud! And serverless is not a hype (as people used to think about cloud technology); it is becoming more and more a viable option for enterprises. If you want to read more of my thoughts on this subject, you can check out an earlier blog here. If you want to see the slides I used at the SDN Event of March 17th, 2017 about this subject, you can find them here.


NDepend 2017: Static Code analyzer for .NET Core

I was excited last June about C# MVP Patrick Smacchia’s static code analyzer NDepend, and you can read about that here. NDepend has a new release, and in this blog I’ll highlight the features that interest me the most, like support for .NET Core. But before we start off with that, let’s first have a look at the integration with Visual Studio 2017, which is important now for writing ASP.NET Core 1.0 web apps.

Support for Visual Studio 2017

Just follow these instructions and you can add NDepend as an extension to Visual Studio 2017! Once installed, as per the aforementioned link, your extensions and updates window looks like:

The NDepend extension is circled in red. You’ll have the familiar menu items available to you:

But of course, as expected with a new release, there is a whole slew of new options as well, such as the VSTS Integration menu item circled in red, which I’ll get back to in a separate paragraph. This paragraph is about the integration with Visual Studio 2017, which this release improves even further by giving more options on when and how to analyse the code:

You can refresh NDepend’s results automatically by starting NDepend’s analysis after each successful build. But this option wasn’t very useful before, as it got in the way. With this release you can suppress this behavior when building for a run/debug or unit-test session. Now you can safely integrate it in your feedback loop without it interfering! Plus the refresh interval is now more finely grained, down to minutes, which complements the new options greatly.

After it’s integrated in Visual Studio 2017 you can let NDepend loose on .NET Core projects!

Support for .NET Core

I am very passionate about .NET Core, and that is why the support by NDepend for .NET Core deserves a paragraph on my watch! One of the things that needed to be solved is the ability of a .NET Core project to target multiple frameworks. As you know, because of this the output directories look like, for instance, “.\bin\debug\net451” and “.\bin\debug\netcoreapp1.0”. These output directories are now resolved properly.

Project.json will not be supported by NDepend, but why would you want that? If you haven’t already, you should really migrate your .NET Core apps to the new .csproj files and use Visual Studio 2017 to develop these projects. NDepend 2017.1 analyzes .NET Core assemblies, and the PDB and source files that belong to them, by default. Having said that, you can always simply analyze the assemblies in their folders if you have a project.json project that you really want analyzed.

And when you let Visual Studio Team Services build your projects on a build server you can now also use NDepend to guard the quality of your code!

Support for Team Services

NDepend now has an extension in the marketplace which you can use as a task in your build process. It analyses the code using all the same rules you are accustomed to and makes the metrics available to you via NDepend’s hub. So you’ll have the familiar dashboard on which you can see the brand new metrics for technical debt and the passes and fails of the brand new quality gates. Both will be handled in separate paragraphs.

Once installed (or downloaded in the case of using TFS) you can guard metrics like the brand new technical debt, which we’ll discuss now.

Smart Technical Debt Estimation

First of all, what is technical debt? A quote from Martin Fowler:

“In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.”

And don’t forget the YouTube video by Ward Cunningham (who coined the phrase): https://www.youtube.com/watch?v=pqeJFYwnkjE.

I’m excited about the addition of smart technical debt estimation, as it allows you to show management, with clear and concise graphs, what the result in money would be in the long run if the team cuts corners here and doesn’t invest time in the architecture there. Management doesn’t speak code; they speak money. For you to succeed as a software engineer it’s important to understand this.

How NDepend expresses technical debt is clear from their docs: “The technical-debt is the estimated man-time that would take to fix the issue.” And man-time can be translated to… money! NDepend expresses technical debt through a set of default rules which are, again, simply written as LINQ queries. So they are very easy to customize to your own needs, or to create from scratch. Let’s see how they’ve implemented this by going to the Queries and Rules Explorer:

Encircled in red is the addition of “Debt” to the Project Rules category. Here you can see a number of metrics that aid in determining the estimated technical debt, for instance “Percentage Debt (Metric)”:

As you can see in the Queries and Rules editor, you have more metrics at your disposal, like the estimated total time to develop the code base and the estimated total time to fix all issues. These can be influenced by the new settings found on the new Issues and Debt tab of the NDepend Project Properties:

You can, for instance, change the “Average cost of man-hour of development” if this is different in your organisation, to estimate the debt better. Now, after adding more code to the solution and rerunning the analysis, I can see in the dashboard what the effect is on the debt:

Apparently I’ve built up technical debt, because it went from 11,8% to 19,65% and my rating dropped from B to C, which in man-hours means I went from 2 hours up to 4 hours and 44 minutes. Obviously this is an estimation, but it can give you an idea. Also, for a more accurate picture I’d need to import the coverage data of my automated tests.

And you can even enforce the reduction of technical debt if it gets too big with the new quality gates!

Quality Gates

With Quality Gates you can enforce code quality rules by disallowing commits to source control if certain metrics get too high or too low. These are again LINQ queries which you can adjust or create yourself. This is the biggest selling point of NDepend in my opinion, as I’ve written about before. So if you go to the Queries and Rules Explorer again:

And you look up the Quality Gates category (encircled in red), we have 14 default queries. We’ll have a quick look at a familiar one, named Percentage Debt (encircled in red). Notice the absence of “(Metric)” which was included to avoid naming collisions. Let’s have a look at the Queries and Rules editor:

It looks almost the same as the new metric we discussed earlier, but please notice the “failif” and “warnif”, which make it a Quality Gate with which you can enforce rules. As it says in the description, this Quality Gate will fail if the estimated debt is more than 30%. So in this case, when the technical debt is over 30%, the build will effectively fail.
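Conceptually, such a Quality Gate is a CQLinq query along these lines. This is only an illustrative sketch: the let-bound names and helper calls below are assumptions made for the sake of the example, not the exact default query NDepend ships:

```
// <QualityGate Name="Percentage Debt" Unit="%" />
failif value > 30%
warnif value > 20%
// Hypothetical helpers: estimated effort to develop the code base,
// and the summed estimated debt over all issues.
let timeToDev = codeBase.EffortToDevelop()
let debt = Issues.Sum(i => i.Debt)
select 100d * debt.ToManDay() / timeToDev.ToManDay()
```

The failif/warnif prefix is what distinguishes a Quality Gate from a plain metric query: the same value expression now carries thresholds the build can act on.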

Conclusion

I like NDepend’s 2017 release, and I’ve highlighted the new features that stood out to me the most. Visual Studio 2017 has more static code analysis built in than its predecessors, but it is still not on par with what NDepend offers.


ASP.NET Core 1.0: View Components

Synopsis: Child Actions are not even supported anymore and Partial Views are less powerful. Meet ASP.NET Core 1.0’s View Components! Example project to accompany this article can be found here: https://github.com/DannyvanderKraan/ViewComponents

Intro

Child Actions often got you into trouble (for instance if you wanted to authorize them with an Action Filter while they were used in _Layout.cshtml) and they were pretty messy, as they had to be implemented as methods on your Controller. They participate in the lifecycle and pipeline of the controller they are implemented in, which is not always what you want; one of the biggest pet peeves was that they were reachable by HTTP requests. But you needed them to build the backend logic for reusable components in your Views. Partial Views are a cleaner way to make reusable content, but lack the ability to have complex, reusable backend logic.

View Components basically combine the two, because they consist of a class with the backend logic and a Razor view for presentation [1]. This is more in line with SoC [3] and is much more testable. View Components are responsible for a portion of the response instead of the entire response, while staying away from the lifecycle and pipeline of controllers. This makes them very lightweight. They are basically a tiny controller by themselves.

Side note: Don’t forget about Partial Views though, because sometimes a View Component is like shooting a fly with a cannon and a lightweight Partial View is much more appropriate.

Before we look at a class for the backend logic of the View Component: if you want to build along, you should add the MVC middleware and set up the basics first.

  1. Go to the ConfigureServices method of the Startup class and type “services.AddMvc();”, have Visual Studio add the necessary NuGet package to your ‘project.json’ file and add the correct ‘using’ statement.
  2. In the Configure method add “app.UseMvcWithDefaultRoute();” and make sure the “app.Run(…” statement is commented out if this is still there.
  3. Add a “HomeController” class to the “Controllers” folder (as long as it derives from the Controller base class you’re fine).
  4. Add an “Index.cshtml” file to the “Views\Home” folder and make the “HomeController” return it.
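Together, the Startup part of that setup amounts to roughly this (a minimal sketch; the namespace and surrounding boilerplate are assumed to be in place already):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register the MVC services in the DI container
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Route requests to controllers, with /Home/Index as the default
        app.UseMvcWithDefaultRoute();
    }
}
```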

Class with Backend Logic

As with almost everything in ASP.NET Core 1.0, creating a class for a View Component can be convention based. This can be done by creating a class with the suffix “ViewComponent”, by decorating the class with the “[ViewComponent]” attribute, or by deriving from a class with this attribute. The advantage is that you’ll have a POCO class. But you can also derive your class from the ViewComponent base class, which is itself decorated with the “ViewComponent” attribute and has the advantage of exposing handy methods and properties. And last but not least, the class must be public, non-nested and non-abstract.

So if I derive from ViewComponent a basic implementation looks like:

public class SomeViewComponent : ViewComponent
{
    public async Task<IViewComponentResult> InvokeAsync()
    {
        return View();
    }
}

You see that despite deriving from ViewComponent I still used the ViewComponent suffix. This is not necessary, but I find it good practice. The suffix is removed by convention, so this View Component is called “Some”. I could change this to any name by using the “[ViewComponent]” attribute, which takes a ‘Name’ parameter (for example: [ViewComponent(Name = “XYZ”)]).

Remember I said View Components are like tiny controllers? Well, for the sake of completeness, deriving from the base class is what really makes them like controllers. Just look at the properties at your disposal:

[ViewComponent]
public abstract class ViewComponent
{
    protected ViewComponent();
    public HttpContext HttpContext { get; }
    public ModelStateDictionary ModelState { get; }
    public HttpRequest Request { get; }
    public RouteData RouteData { get; }
    public IUrlHelper Url { get; set; }
    public IPrincipal User { get; }
    public ClaimsPrincipal UserClaimsPrincipal { get; }
    
    [Dynamic]    
    public dynamic ViewBag { get; }
    [ViewComponentContext]
    public ViewComponentContext ViewComponentContext { get; set; }
    public ViewContext ViewContext { get; }
    public ViewDataDictionary ViewData { get; }
    public ITempDataDictionary TempData { get; }
    public ICompositeViewEngine ViewEngine { get; set; }
}

Then you see an “InvokeAsync” method in “SomeViewComponent”. This is also by convention, so the method name and return type are important. The DefaultViewComponentInvoker first searches for an ‘InvokeAsync’ method and, if it doesn’t find this async method, it searches for the synchronous ‘Invoke’ method (if neither is present it throws an exception). Check this [2] cool answer on Stack Overflow for help with customizing how View Components are invoked and for a better understanding of the DefaultViewComponentInvoker.

The Invoke method’s return value is rendered with help from the View method of the ViewComponent base class. It returns a ViewViewComponentResult, which implements IViewComponentResult and thus the Execute and ExecuteAsync methods. In this case the Execute method eventually gets called, which locates and renders the specified view. In this example I didn’t specify a view name, which means it searches for the “Default” view.
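If you don’t want the “Default” convention, the base class’s View method has an overload that takes a view name. A minimal sketch, where “Special” is a made-up name for illustration:

```csharp
public async Task<IViewComponentResult> InvokeAsync()
{
    // Renders Views/Shared/Components/Some/Special.cshtml
    // instead of the conventional Default.cshtml
    return View("Special");
}
```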

Well we don’t have a Default.cshtml yet, so let’s add one!

Razor View

As free as you are with the location of the View Component class (I place them in a ViewComponents folder), so strict are the conventions for the View Component views. They are expected to be placed in a folder with the same name as the View Component (in our example it should be called “Some”), which has to be a subdirectory of the “Components” folder. You are free to add a “Components” folder to every controller folder in the “Views” folder (like, for instance, the “Home” folder), but you ought to design View Components for reusability, and therefore the “Shared” folder is a better option. This forces you to keep reusability in mind. So I added a Default.cshtml at “Views\Shared\Components\Some”.

Side note: However, if you do not want to follow the conventions, you can place the views anywhere you like; just pass the entire path when you return the view in the View Component’s Invoke method.

And the only HTML I added in “Default.cshtml” is:


<h2>Some-Default</h2>

Then in the “Index.cshtml” view add the following Razor statement:

@await Component.InvokeAsync("Some")

This is the statement to invoke View Components with. The first parameter is the name of the View Component, any subsequent parameter should correspond to parameters of the Invoke method of the View Component class. In this example there are no parameters, but later on in this article I’ll show an example with a parameter.

Well, when we run this web app the View Component doesn’t do much exciting stuff right now; you’ll just see “Some-Default” in the browser. But this basic example is enough to show the core mechanics behind View Components.

You see, a thing to remember is that View Components don’t use model binding; they use the data provided to them. You can provide parameters when you invoke them, as said before, or inject the dependencies you need by using ASP.NET Core 1.0’s native support for Dependency Injection. This means you could, for instance, add a View Component to the layout page and use it throughout the whole application. Let’s make this example more interesting with some parameters and Dependency Injection.

I added the class “SomeStuff” to my “Models” directory:

public class SomeStuff
{
	public string Id { get; set; }
}

Then I added “SomeRepository” to the same folder:

public class SomeRepository
{
	public List<SomeStuff> GetSomeStuff()
	{
		return new List<SomeStuff>()
		{
			new SomeStuff() {Id = "97CFB273-7388-4D85-85F1-061297C5E9A5"},
			new SomeStuff() {Id = "6BB03A93-FFC0-4F5E-BED6-DC3D87C3FCFD"}
		};
	}
}

As you can see, nothing fancy, just returning two instances of the “SomeStuff” class via the “GetSomeStuff” method. To be able to inject this in the View Component we need to add this ‘service’ to the ‘container’. So the “ConfigureServices” method in the “Startup” class looks like:

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc();
	services.AddTransient<SomeRepository>();
}

This is the simplest way to add a service, as a transient, which ‘news up’ an instance of SomeRepository every time SomeRepository is requested. Dependency Injection in ASP.NET Core 1.0 is worth a separate article, so I am not going into the details right now. Simply modify “SomeViewComponent” like this to request “SomeRepository” via ‘constructor injection’:

private SomeRepository SomeRepository { get; set; }

public SomeViewComponent(SomeRepository someRepository)
{
	if(someRepository == null) throw new ArgumentNullException(nameof(someRepository));
	SomeRepository = someRepository;
}

And change the “Invoke” method as follows:

public async Task<IViewComponentResult> InvokeAsync()
{
	return View(SomeRepository.GetSomeStuff());
}

This will pass an instance of the List of SomeStuff as a parameter to the “Default” view via the base class’s “View” method. “Default.cshtml” looks like:

@model List<ViewComponents.Models.SomeStuff>

<h2>Some-Default</h2>

<ul>
    @foreach (var stuff in Model)
    {
        <li>@stuff.Id</li>
    }
</ul>

Which will display the two GUIDs you saw earlier beneath each other in the browser when you run this web app.

But you can also pass parameters to the View Component. To demonstrate this, add a parameter to the “Invoke” method of “SomeViewComponent”:

public async Task<IViewComponentResult> InvokeAsync(string id)
{
	return View(SomeRepository.GetSomeStuff().Where(s => s.Id.Equals(id)).ToList());
}

The “Index.cshtml” looks like:


<h1>Home-Index</h1>

@await Component.InvokeAsync("Some", new {id = "97CFB273-7388-4D85-85F1-061297C5E9A5"})

As you can see, I can simply call Invoke as I would any C# method. The only difference is that it always starts with a ‘name’ parameter identifying the appropriate View Component. Any subsequent parameters can be passed through with an anonymous object. So imagine the Invoke method has, next to the id parameter, also an id2 parameter; you’d invoke it as follows:


<h1>Home-Index</h1>

@await Component.InvokeAsync("Some", new {id = "97CFB273-7388-4D85-85F1-061297C5E9A5", id2 = "199b1ef5-b0e1-46ef-8092-e7006ddf7dd1"})

Conclusion

View Components are a very powerful addition. Every time a Partial View just doesn’t do the job, you can use a View Component.
They differ from Child Actions in that you can’t use model binding anymore; instead you pass parameters to the component. They don’t participate in a controller’s lifecycle, so no Action Filters. And they are not reachable via HTTP like Child Actions were. But in return you get a more reusable and powerful component which adheres more to the SoC principle, because you can simply inject your dependencies or pass in the parameters you need. You can find the examples from this article on my GitHub [4].


ASP.NET Core 1.0: Web API Automatic Documentation with Swagger and Swashbuckle

Synopsis: In this article I will help you get automatically generated, interactive documentation going for your ASP.NET Core Web API by using Swagger and Swashbuckle. As of December 23rd this article has been upgraded to the latest version. We’ll touch on the following subjects:

  • Getting started
  • Multiple versions
  • Complex models
  • Adding XML notations
  • Multiple Responses

Want to read this in Dutch instead of English? Then click here. Small caveat: that article has not been upgraded to the latest version!

Intro

The problem with any REST API is that you need to maintain the documentation describing the API manually (and even if you generate it from code with some tool, it’s still semi-automatic). So it will often be out of date, inaccurate, prone to mistakes and not interactive. To the rescue comes Swagger, which provides a powerful representation of your RESTful API and thus automatic documentation. Open source Swagger is platform-agnostic, and this article is specifically about adding the power of Swagger to an ASP.NET Core Web API. For that we need an implementation specific to this platform, which luckily is provided by the open source implementation called Swashbuckle.

Getting started

Swashbuckle seamlessly adds Swagger by combining the built-in API Explorer from ASP.NET Core MVC with Swagger’s swagger-ui, to enable discovery and generate interactive documentation for your API’s users. Remember that at the time of writing it’s a pre-release and the interface might still shift and change in the future. The best way to get started is to install the single NuGet package called “Swashbuckle”, which contains the Swagger generator and swagger-ui embedded. Add the package Swashbuckle 6.0.0-beta902 to the dependencies section of the project.json file:

    "Swashbuckle": "6.0.0-beta902"

Side note: There is no sign of packages for RTM at the moment of writing this, but let’s hope the API doesn’t change.

Once this is done we’ll need to configure Swagger in the Startup class. First, in the ConfigureServices method of the Startup class, we add the services needed to generate the files Swagger needs, like this:

services.AddSwaggerGen();

This will register an implementation of ISwaggerProvider with the default settings. Then in the same class in the Configure method you need to add the middleware to the HTTP request pipeline like this:

app.UseSwagger();

This will serve the generated Swagger as a JSON endpoint. If you start up the web application now and navigate to the standard route, “/swagger/v1/swagger.json”, you’ll see the following JSON:

{"swagger":"2.0","info":{"version":"v1","title":"API V1"},"basePath":"/","paths":{},"definitions":{},"securityDefinitions":{}}

Side note: You can change this standard route via an option of the UseSwagger extension method.

Well, let’s put Swagger to work by adding an MVC controller. Add the following code to the ConfigureServices method of the Startup class:

services.AddMvc();

Add the following code to the Configure method of the Startup class:

app.UseMvcWithDefaultRoute();

Add a folder named “Controllers”.

Add a class to this folder named “HomeController” and have it inherit from Controller. Add the following action method to the Home controller:

[HttpGet("About")]
public ContentResult About()
{
	return Content("An API to sample Swagger with Swashbuckle in ASP.NET Core.");
}

Now run the web application again and navigate to “/swagger/v1/swagger.json” and you should see the following JSON:

{"swagger":"2.0","info":{"version":"v1","title":"API V1"},"basePath":"/","paths":{"/About":{"get":{"tags":["Home"],"operationId":"AboutGet","produces":[],"responses":{"200":{"description":"OK"}},"deprecated":false}}},"definitions":{},"securityDefinitions":{}}

Notice how the key “paths” now has an entry, thanks to the standard API discovery which Swashbuckle utilizes to generate the swagger.json that fuels the Swagger engine. To utilize swagger-ui and get interactive documentation, add the following line of code to the Configure method in the Startup class:

app.UseSwaggerUi();

This adds middleware to the HTTP request pipeline which generates the interactive documentation based on the swagger.json. So if you navigate to “swagger/ui” it will look like this:

figure1_swaggerui

Side note: UseSwaggerUi takes two parameters, “baseRoute” (to change the “swagger/ui” default route) and “swaggerUrl” (to change the default location “/swagger/v1/swagger.json” where it expects the swagger.json file to be).
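For example, combining both of those parameters (a sketch; “docs” is an arbitrary route choice for illustration):

```csharp
// Serve the UI at /docs, reading the swagger.json from its default location
app.UseSwaggerUi("docs", "/swagger/v1/swagger.json");
```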

The interactive documentation is built up in a RESTful manner. So you first see the link “Home”, because the ‘resource’ “About” is beneath “Home”. If you’d click on it, you’d see the following:

figure2_swaggerui

We can see it’s a “Get” method and we see the relative path “/About”. If you click on “/About” you’d see:

figure3_swaggerui

As you can see, it lists the response messages and you can also “Try it out!”. And if you click on that you’ll see:

figure4_swaggerui

You can see the “CURL” command, the “Request URL”, the “Response Body”, the “Response Code” and even the “Response Headers”, all with only three lines of code! Well, I think that is pretty sweet!

Multiple Versions

A common problem with APIs for external use is that you don’t know how many people depend on your API, and when you make a breaking change you don’t want to break everyone else’s code, so you start versioning your API. Luckily Swagger has support for multiple versions, and together with Swashbuckle you can easily configure it. Let’s start with making a second version of our “About” action method on the “Home” controller. Change the action methods to the code below:

[HttpGet("api/v1/About")]
public ContentResult About()
{
	return Content("An API to sample Swagger with Swashbuckle in ASP.NET Core.");
}

[HttpGet("api/v2/About")]
public ContentResult About2()
{
	return Content("An API (v2) to sample Swagger with Swashbuckle in ASP.NET Core.");
}

In the ConfigureServices method of the Startup class add the following code:

services.AddSwaggerGen(options =>
	{
		options.MultipleApiVersions(new Swashbuckle.Swagger.Model.Info[]
		{
			new Swashbuckle.Swagger.Model.Info
			{
				Version = "v2",
				Title = "API (version 2.0)",
				Description = "A RESTful API to show Swagger and Swashbuckle"
			},
			new Swashbuckle.Swagger.Model.Info
			{
				Version = "v1",
				Title = "API",
				Description = "A RESTful API to show Swagger and Swashbuckle"
			}
		}, (description, version) =>
		{
			return description.RelativePath.Contains($"api/{version}");
		});

	});

Side note: If your Web API is hosted in IIS, you should avoid using full-stops in the version name (e.g. “1.0”). The full-stop at the tail of the URL will cause IIS to treat it as a static file (i.e. with an extension) and bypass the URL Routing Module and therefore, Web API.

The “AddSwaggerGen” method is the extension method with which you’ll be able to configure your Swagger document. It takes an Action of SwaggerGenOptions, which has a “MultipleApiVersions” method that takes a collection of Info objects with which you can describe your versions. These are used purely for descriptive purposes.

The real workhorse is the second parameter: a Func which takes an instance of “ApiDescription” and a “version” as a string, and is expected to return a boolean — true if the description belongs to that version, false if not. Because there is no standardised way to version an API, Swashbuckle leaves this predicate up to you, so thanks to this clever mechanism you can resolve the version any way you like. In this example I expect the version (e.g. “v1” or “v2”) to come after “api/” in the route.
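Because the predicate is just a Func, its matching logic can be exercised in isolation. Here is a minimal sketch of mine (with the ApiDescription reduced to its relative path, a hypothetical simplification) which also guards against prefix collisions, such as “v1” accidentally matching “v10”, by requiring a trailing slash:

```csharp
using System;

public class VersionResolverSketch
{
    // Stand-in for: description.RelativePath.Contains($"api/{version}").
    // The trailing slash avoids "v1" matching a route like "api/v10/...".
    public static bool MatchesVersion(string relativePath, string version) =>
        relativePath.Contains($"api/{version}/");

    static void Main()
    {
        Console.WriteLine(VersionResolverSketch.MatchesVersion("api/v1/About", "v1"));  // True
        Console.WriteLine(VersionResolverSketch.MatchesVersion("api/v2/About", "v1"));  // False
        Console.WriteLine(VersionResolverSketch.MatchesVersion("api/v10/About", "v1")); // False
    }
}
```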

If everything has been done correctly, you should be able to navigate to “swagger/ui” again, change “v1” to “v2” in the swagger.json URL and press “Explore” to try out version 2 of the About action method:

figure5_swaggerui

Complex Models

So that’s all fine and dandy, but what about more complex use cases, where the action method expects an object and perhaps returns an object of some kind? Let me show you what that looks like with Swagger, by adding the following two classes first:

public class Something
{
	[JsonProperty("someint")]
	public int SomeInt { get; set; }
	[JsonProperty("somestring")]
	public string SomeString { get; set; }
}

public class SomeResponse
{
	[JsonProperty("someresponseint")]
	public int SomeResponseInt { get; set; }
	[JsonProperty("someresponsestring")]
	public string SomeResponseString { get; set; }
}

Then add the following action method to HomeController:

[HttpPost("api/v1/GiveMeSomething")]
public IActionResult GiveMeSomething([FromBody] Something something)
{
	return Ok(new SomeResponse()
	{
		SomeResponseInt = something.SomeInt,
		SomeResponseString = something.SomeString
	});
}

Navigate to “swagger/ui” again. Notice how “GiveMeSomething” is added and how it discovered that it needs a parameter called “something”. See the “Model Schema” on the right; click on it to add it as the value of parameter “something”. Change the individual values and click “Try it out!” and it should look like:

figure6_swaggerui

Side note: Notice how Swagger uses the JSON names that are declared above the properties.

Adding XML notations

So far the generated documentation hasn’t been very descriptive. Luckily Swashbuckle can utilize XML comments to add documentation to Swagger. First we’ll need to check the “XML documentation file” (VS2015 update 3) checkbox on the “Build” tab of the project properties (or set xmlDoc to true in the buildOptions section of the project.json file):

figure7_produceoutputsonbuild

This will produce the file with the XML comments in it, which Swashbuckle needs to be able to utilize them. Now adjust your code as follows:

services.AddSwaggerGen(options =>
	{
		options.MultipleApiVersions(new Swashbuckle.Swagger.Model.Info[]
		{
			new Swashbuckle.Swagger.Model.Info
			{
				Version = "v2",
				Title = "API (version 2.0)",
				Description = "A RESTful API to show Swagger and Swashbuckle"
			},
			new Swashbuckle.Swagger.Model.Info
			{
				Version = "v1",
				Title = "API",
				Description = "A RESTful API to show Swagger and Swashbuckle"
			}
		}, (description, version) =>
		{
			return description.RelativePath.Contains($"api/{version}");
		});
		options.IncludeXmlComments(pathToDoc);
	});

Side note: The variable pathToDoc is the path to the XML documentation file. Adjust this to your own personal needs.
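One common way to build such a path (my own suggestion, not from the original setup) is to combine the application’s base directory with the XML file name, which by default matches the assembly name. The “MyApi.xml” name below is hypothetical:

```csharp
using System;
using System.IO;

public class PathToDocSketch
{
    // Hypothetical helper: the XML file name must match whatever
    // your build actually produces next to the binaries.
    public static string GetPathToDoc(string xmlFileName) =>
        Path.Combine(AppContext.BaseDirectory, xmlFileName);

    static void Main()
    {
        Console.WriteLine(PathToDocSketch.GetPathToDoc("MyApi.xml"));
    }
}
```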

Now we can add some XML comments to the controller and models:

/// <summary>
/// Default entrypoint of the API.
/// </summary>
public class HomeController: Controller
{
	/// <summary>
	/// Description of what this API is about.
	/// </summary>
	/// <returns></returns>
	[HttpGet("api/v1/About")]
	public ContentResult About()
	{
		return Content("An API to sample Swagger with Swashbuckle in ASP.NET Core.");
	}

	/// <summary>
	/// Give something and it will return a response.
	/// </summary>
	/// <param name="something"></param>
	/// <returns></returns>
	[HttpPost("api/v1/GiveMeSomething")]
	public IActionResult GiveMeSomething([FromBody] Something something)
	{
		return Ok(new SomeResponse()
		{
			SomeResponseInt = something.SomeInt,
			SomeResponseString = something.SomeString
		});
	}

	/// <summary>
	/// Description of what API v2 is about.
	/// </summary>
	/// <returns></returns>
	[HttpGet("api/v2/About")]
	public ContentResult About2()
	{
		return Content("An API (v2) to sample Swagger with Swashbuckle in ASP.NET Core.");
	}
}

/// <summary>
/// Just something to put in the request.
/// </summary>
public class Something
{
	/// <summary>
	/// Just some int.
	/// </summary>
	[JsonProperty("someint")]
	public int SomeInt { get; set; }
	/// <summary>
	/// Just some string.
	/// </summary>
	[JsonProperty("somestring")]
	public string SomeString { get; set; }
}

/// <summary>
/// Just some response to give back.
/// </summary>
public class SomeResponse
{
	/// <summary>
	/// Some int for the response.
	/// </summary>
	[JsonProperty("someresponseint")]
	public int SomeResponseInt { get; set; }
	/// <summary>
	/// Some string for the response.
	/// </summary>
	[JsonProperty("someresponsestring")]
	public string SomeResponseString { get; set; }
}

And if you’d navigate again to “swagger/ui” you should see something like:

figure8_xmlcomments

Please note that “Model” is selected instead of “Model Schema” beneath “Data Type” to make the XML comments visible on the model. Furthermore, note the XML comments visible behind the relative URLs.

We are almost done with the basics, but before I end this article we should really look at the response, which is still at its bare minimum right now.

Multiple Responses

What if we’d adjust the “GiveMeSomething” method to check whether “SomeInt” is 50 or below and return a BadRequest if it isn’t, as follows:

/// <summary>
/// Give something and it will return a response.
/// </summary>
/// <param name="something"></param>
/// <returns></returns>
[ProducesResponseType(typeof(SomeResponse), 200)]
[ProducesResponseType(typeof(SomeResponse), 400)]
[HttpPost("api/v1/GiveMeSomething")]
public IActionResult GiveMeSomething([FromBody] Something something)
{
	if (something.SomeInt <= 50)
	{
		return Ok(new SomeResponse()
		{
			SomeResponseInt = something.SomeInt,
			SomeResponseString = something.SomeString
		});
	}
	else
	{
		return new BadRequestObjectResult(new SomeResponse()
		{
			SomeResponseInt = 0,
			SomeResponseString = string.Empty
		});
	}
}

Now it returns two responses, an OK status and a BadRequest. For Swashbuckle to be able to utilize this in the documentation you need to add the ProducesResponseType attribute. It takes the status code it’s triggered on and the type of the response. If you provide a type, Swashbuckle can utilize the XML comments on the model again. So if you’d navigate to “swagger/ui” again it should look like:

figure9_multipleresponses
Side note: Image may not be entirely accurate anymore.

You see the main response, in this case “Response Class (Status 200)”, right beneath the relative URL “/api/v1/GiveMeSomething”. As you can see, the XML comments are visible for “someresponseint” and “someresponsestring”. Beneath the “Parameters” section you see the “Response Messages” section, which lists all the other responses. In this case it shows us we can expect an HTTP status code of 400; the description is shown under “Reason” and the XML comments are shown again at “Response Model”.

Outro

These are the basic features of Swagger with Swashbuckle on ASP.NET Core, which should give you a flying start! If you feel anything is missing, or you have any questions, let me know in the comment section. For now, I hope this helped you get started!


Realworld example ASP.NET Core 1.0’s Middleware

Synopsis: HTTP Modules and HTTP Handlers are no longer used in ASP.NET Core 1.0. But what if your web app relies on HTTP Modules/Handlers to function? Well, I had a web application that relied on HTTP Modules and a Handler. In this article I will show you how this problem is solved with ASP.NET Core 1.0’s Middleware!

Intro

Middleware are software components that can be integrated into the HTTP pipeline and intercept HTTP requests and responses[1]. To support OWIN, thus decoupling the web application from the web server, ASP.NET Core provides a standard way to implement Middleware as defined by OWIN [2]. Conceptually the way Middleware works looks like:

concept-middleware
Figure 1: Concept Middleware

So if you want to intercept the HTTP pipeline you’ll have to use Middleware because HTTP Modules and Handlers are gone in ASP.NET Core 1.0… Or are they?

AspNetCoreModule as an HTTP Module

If you create a new ASP.NET Core solution you will still find a “web.config” file in the project’s root. The content is as follows:

<?xml version="1.0" encoding="utf-8"?>
<configuration>

  <!--
    Configure your application settings in appsettings.json. Learn more at http://go.microsoft.com/fwlink/?LinkId=786380
  -->

  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
    </handlers>
    <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false"/>
  </system.webServer>
</configuration>

This file’s only purpose is to allow the web application to run on IIS. This is done by adding the ASP.NET Core Module (a new native IIS module) [3] as a handler. This module makes it possible for external processes that listen to HTTP requests, like “dotnet.exe”, to have requests proxied into them and get process management from IIS.

Side note: The ASP.NET Core Module replaces the HttpPlatformHandler introduced with RC1. To emphasise how important the HttpPlatformHandler was: Scott Hanselman explained a lot more about this generic reverse proxy mechanism [4], which provides endless possibilities. But the team moved forward with a module that has ASP.NET-specific features, as opposed to a generic reverse proxy.

IISPlatformHandler as Middleware

To start explaining middleware I’d like to go back to RC1’s IISPlatformHandler. You see, you used to integrate the IIS HttpPlatformHandler middleware to be able to interact with the module. You did this by adding the following line of code to the Configure method of the Startup class:

app.UseIISPlatformHandler();

Side note: Now you add the “UseIISIntegration” extension method to the WebHostBuilder in the static void Main of the Program class.

UseIISPlatformHandler() is actually an Extension Method extending IApplicationBuilder and what it does as copied from the summary is:

“Adds middleware for interacting with the IIS HttpPlatformHandler reverse proxy module. This will handle forwarded Windows Authentication, request scheme, remote IPs, etc..”

And when we look at the sourcecode on GitHub[5] to see what this extension method exactly does for us, we see the following code (at the overload with no options):

if (app == null)
{
    throw new ArgumentNullException(nameof(app));
}

return app.UseMiddleware<IISPlatformHandlerMiddleware>(new IISPlatformHandlerOptions());

As mentioned before, UseIISPlatformHandler is just an extension method that hides the complexities of setting up and initializing the IISPlatformHandlerMiddleware. You can set up and add any middleware class to the HTTP pipeline via the UseMiddleware method you see above. All your middleware class needs is a constructor that requests the RequestDelegate via constructor injection, as IISPlatformHandlerMiddleware[6] demonstrates:

public IISPlatformHandlerMiddleware(RequestDelegate next, IISPlatformHandlerOptions options)
{
    if (next == null)
    {
        throw new ArgumentNullException(nameof(next));
    }
    if (options == null)
    {
        throw new ArgumentNullException(nameof(options));
    }
    _next = next;
    _options = options;
}

And an Invoke method:

public async Task Invoke(HttpContext httpContext)
{
    UpdateScheme(httpContext);

    UpdateRemoteIp(httpContext);

    var winPrincipal = UpdateUser(httpContext);

    var handler = new AuthenticationHandler(httpContext, _options, winPrincipal);
    AttachAuthenticationHandler(handler);

    try
    {
        await _next(httpContext);
    }
    finally
    {
        DetachAuthenticationhandler(handler);
    }
}

Side note: I will not go into details about what the IISPlatformHandlerMiddleware exactly does, because this is an article about writing your own Middleware.

You could even skip requesting the RequestDelegate in your constructor if you always want your middleware to be the last one in the HTTP pipeline. But all in all, implementing your own Middleware seems easy enough. Let’s quickly review the case for this article and then move on to building the middleware:

Case

This is a case about healthcare in The Netherlands, so if anything is unclear please let me know. General practitioners (GPs) from private practices should be able to log on to a website on which they can leave notes about patients who they suspect will need to visit a medical centre at night or in the weekend. Every medical centre may own a different database on a different server. When a GP logs in, we want to retrieve all the databases (and their servers) the GP is allowed on, in order to retrieve the patients this GP is allowed to see. We used to do this in an HTTP module and now we want to do this with Middleware.

Writing Middleware

For this example we are going to assume the user is authenticated and we only need to authorize the user, by providing a collection of databases and their servers. So to tackle this problem at a bare minimum I came up with the following middleware as an example:

public class GeneralPractitionerAuthorizationMiddleware
{
	private RequestDelegate Next { get; }
	private IGeneralPractitionerAuthorizationRepository Repository { get; }
	private CurrentGeneralPractitionerService Service { get; }
	private CurrentGeneralPractitionerAuthorizationsService AuthorizationService { get; }

	public GeneralPractitionerAuthorizationMiddleware(RequestDelegate next, 
		IGeneralPractitionerAuthorizationRepository repository,
		CurrentGeneralPractitionerService service,
		CurrentGeneralPractitionerAuthorizationsService authorizationService)
	{
		if (next == null) throw new ArgumentNullException(nameof(next));
		if (repository == null) throw new ArgumentNullException(nameof(repository));
		if (service == null) throw new ArgumentNullException(nameof(service));
		if (authorizationService == null) throw new ArgumentNullException(nameof(authorizationService));
		Next = next;
		Repository = repository;
		Service = service;
		AuthorizationService = authorizationService;
	}

	public async Task Invoke(HttpContext context)
	{
		var generalPractitionerAuthorizations = Repository.GetAuthorizationsForGeneralPractitioner(Service.GetId());
		AuthorizationService.Authorizations = generalPractitionerAuthorizations;
		await Next.Invoke(context);
	}
}

You see the RequestDelegate as discussed before, because I want to call the next middleware in the chain.
Then a Repository, which is going to pretend to retrieve a collection of databases and their servers from some persistent storage.
Then a fake Service, which holds the Identifier of the currently logged in general practitioner.
And an AuthorizationService to simply hold the retrieved collection in memory, so this example didn’t get polluted with all kinds of non-middleware related material.

Side note: Please keep in mind that it’s just an example to explain Middleware. Authorization should be done via claims and kept with the ClaimsPrincipal via Cookie middleware or another means of persistence.

I request all these items via the constructor, because middleware uses the native ability of ASP.NET Core to utilize Dependency Injection, which I’ll discuss later.

Then you see the Invoke method which all Middleware classes need to have. The implementation of the Invoke method doesn’t matter, you can check that out at the GitHub repository if you wish[7]. The line of code that does matter however is: “await Next.Invoke(context);”, which makes sure the pipeline continues.

Side note: Important to note is that you can also have logic after the “await Next.Invoke(context);”, to process some more logic when the response returns.
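The side note’s point can be demonstrated without ASP.NET at all: a middleware pipeline is just nested delegates, so code placed after the call to the next delegate runs while the response travels back out. A minimal, framework-free sketch of mine:

```csharp
using System;
using System.Collections.Generic;

public class OnionSketch
{
    public static List<string> Run()
    {
        var log = new List<string>();
        // The innermost "endpoint", comparable to the last middleware in the chain.
        Action endpoint = () => log.Add("endpoint");
        // A middleware-like wrapper: pre-logic, call next, post-logic.
        Action middleware = () =>
        {
            log.Add("before next");  // runs while the request travels in
            endpoint();              // stands in for: await Next.Invoke(context);
            log.Add("after next");   // runs while the response travels back out
        };
        middleware();
        return log;
    }

    static void Main() =>
        Console.WriteLine(string.Join(", ", OnionSketch.Run()));
    // prints: before next, endpoint, after next
}
```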

I’ve followed the convention to hide the details about how to use this middleware in an extension method, like so:

public static class GeneralPractitionerAuthorizationMiddlewareExtensions
{
	public static IApplicationBuilder UseGeneralPractitionerAuthorizationMiddleware(this IApplicationBuilder builder)
	{
		return builder.UseMiddleware<GeneralPractitionerAuthorizationMiddleware>();
	}
}

It’s not that complex: I simply call the “UseMiddleware” method, which handles everything from hooking the middleware into the pipeline to resolving its dependencies. We still need to register those dependencies though, and I’ve followed the same pattern of hiding the details in an extension method again:

public static class GeneralPractitionerAuthorizationServiceCollectionExtensions
{
	public static IServiceCollection AddGeneralPractitionerAuthorization(this IServiceCollection services)
	{
		services.AddSingleton<IGeneralPractitionerAuthorizationRepository, GeneralPractitionerAuthorizationRepository>();
		services.AddSingleton<CurrentGeneralPractitionerService>();
		services.AddSingleton<CurrentGeneralPractitionerAuthorizationsService>();
		return services;
	}
}

Nothing fancy, I reckon. I just added the types as singletons. Of course in a real-world application you should be careful with singletons, but it keeps this example clean and simple. Returning the IServiceCollection instance keeps the API fluent if you wish.

Now you can add this line of code to the ConfigureServices method of the Startup class:

services.AddGeneralPractitionerAuthorization();

And this line of code to the “Configure” method:

app.UseGeneralPractitionerAuthorizationMiddleware();

Middleware without writing a class

The request pipeline is built using Request Delegates, which you saw in action earlier. What if your case is so simple that you don’t want to write an entire class? Then you can configure these delegates using the “Run”, “Map” and “Use” extension methods on IApplicationBuilder. Often the Request Delegates are then specified using in-line anonymous methods. A simple example of the “Use” extension method could be:

app.Use(async (context, next) =>
{
	context.Request.Headers.Add("SomeExampleKey", "SomeExampleValue");
	await next.Invoke();
});

You could for instance add a value to the HTTP headers of the request, or log information, and many more possibilities. An example of the “Run” extension method is in every project you create:

app.Run(async (context) =>
{
	await context.Response.WriteAsync("Hello World!");
});

Keep in mind that a “Run” middleware is always the last middleware in the pipeline, because it doesn’t call a next Request Delegate; it terminates the pipeline and writes the response.

If you need to branch off the request pipeline you need “Map”. You can check out an example of “Map” (or to be more accurate “MapWhen”) in the referenced example on my GitHub.
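For completeness, here is a minimal sketch of what such a branch can look like; MapWhen takes a predicate over the HttpContext and a configuration action for the branch. The “branch” query key and the response text are hypothetical values of mine, not taken from the GitHub sample:

```csharp
// Sketch: branch the pipeline for requests carrying a "branch" query key.
// Requests without that key continue down the normal pipeline untouched.
app.MapWhen(
    context => context.Request.Query.ContainsKey("branch"),
    branch => branch.Run(async context =>
    {
        await context.Response.WriteAsync("Handled by the branched pipeline.");
    }));
```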

Conclusion

Writing your own reusable middleware is very easy. All it needs is an Invoke method and, in most cases, a Request Delegate. ASP.NET Handlers and Modules are not platform agnostic and therefore needed to go. If you ask me, what we have now is infinitely better: fewer brittle configurations in dodgy XML files, it is cross-platform, and you can even be OWIN-compliant if you must. Go check out my sample on GitHub if you haven’t already! Small caveat: I’ve used it a little bit as my own playground, but so can you. Thank you for reading!

Links

  1. Middleware: https://docs.asp.net/en/latest/fundamentals/middleware.html
  2. OWIN: https://docs.asp.net/en/latest/fundamentals/owin.html
  3. AspNetCoreModule: https://github.com/aspnet/Announcements/issues/164
  4. HttpPlatformHandler with Scott Hanselman: https://channel9.msdn.com/Shows/Azure-Friday/The-HTTP-Platform-Handler-with-Scott-Hanselman
  5. IISPlatformHandlerMiddlewareExtensions: https://github.com/aspnet/IISIntegration/blob/c93e4f09f2e5648c70bfa86748911f7ea0c18148/src/Microsoft.AspNet.IISPlatformHandler/IISPlatformHandlerMiddlewareExtensions.cs
  6. IISPlatformHandlerMiddleware: https://github.com/aspnet/IISIntegration/blob/c93e4f09f2e5648c70bfa86748911f7ea0c18148/src/Microsoft.AspNet.IISPlatformHandler/IISPlatformHandlerMiddleware.cs
  7. Middleware sample on GitHub: https://github.com/DannyvanderKraan/MiddleWareSample

NDepend and why .Net developers need it

Intro:

Let me start this article by mentioning that I had exactly the same sentiments as Henry Cordes in 2009. I was also approached by C# MVP Patrick Smacchia to look at NDepend, the tool he developed, and after a quick search on Google I got really excited to check it out! Next to the fact that I was honoured he considered me, I was really curious about what NDepend could do for me.

NDepend in a nutshell:

Quality of software is something software engineers can debate forever. There is functional quality of software, which is about the fit for purpose of the software. Often measured by acceptance testing and such practices. Then there is the quality of the static structure, which concerns aspects such as robustness, maintainability, and so forth. One of the ways we can measure this is by analysing the code. And this is exactly where NDepend offers superior support, which will be the focus of this article.

Features:

So how does NDepend help you analyse your code and thus improve its quality? By providing a lot of features! In order to check them out I downloaded and installed NDepend as explained here, for instance (NDepend has nice documentation too, but I found that blog post very helpful). I used the Visual Studio integration, because I like having my development software in one place. I opened one of our old tools, a legacy WinForms application which needed some work done, to see if NDepend could help me whip it into shape again. I’ll take you with me on this small journey and breeze over some of the features to see what NDepend is about.

Dashboard

The first thing you’ll want to do in order to use NDepend is “Attach new NDepend project to VS solution”:
NDepend-figure1
Figure 1

This will allow you to choose which assemblies you want analysed in the next dialog. There is very good support on NDepend’s website, so I won’t include every screenshot. But you’ll usually want to start with the dashboard, which in my case looked like the following screenshot:
NDepend-figure2
Figure 2

You also have the option to let NDepend make a report in HTML, which looks like the following screenshot:
NDepend-figure3
Figure 3

When you see this for the first time it can be quite a daunting task to take in all the metrics. Not with an application as small as the one I’m using for this article, of course, but I’ve also let NDepend loose on one of our bigger applications and then the numbers attack you at first. Which ones do you care about? I have my own ideas about code comments, so I’m not really bothered by comment coverage, for instance. But my interest was immediately piqued by the rule violations. What are those rules about, then? And are there rules I care more about than others? These are questions you need to figure out with your development team when you talk about the quality of your code, draw up your coding standards and see how NDepend can guard them for you.
In order to answer these questions for this blog post, I’ll look at some of the metrics NDepend can dig up for you.

Code metrics

You can go scientific about code metrics and perhaps have this placemat to enjoy your lunch on, but for the clarity of this article let’s just pick a few metrics of the 82 NDepend provides and judge them on face value.

Cyclomatic Complexity

This metric is basically the number of decisions in a procedure (think ‘if’ statements). Both Visual Studio and NDepend provide this code metric, but NDepend takes it a step further by also providing the IL cyclomatic complexity, which is language independent. NDepend recommends (in short) a CC no higher than 30 and an ILCC no higher than 40, while Microsoft will report a violation if it’s higher than 25. With NDepend, if I want to adjust the thresholds I can simply adjust the rule, which is constructed with NDepend’s “CQLinq”, which I’ll get back to in a moment. In Visual Studio you can only customize rulesets, not individual rules (see the discussion on Stack Overflow here).

Depth of Inheritance Tree

The DIT is a metric which indicates the number of base classes a class inherits from. This metric could say something, or nothing at all. What I mean is that when you inherit from a class in the .NET Framework, for instance, or a third-party library, this metric could be high, but inheriting from base classes of a framework is often not a maintainability concern. If you build complex inheritance like this in your own code base, however (and your application is not a framework), then maintaining the code base can become challenging over time, and it’s widely accepted to consider composition via Dependency Injection instead. NDepend recommends no higher than 6 for this metric; I’d personally keep it at 3 or lower as a rule of thumb.
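That advice, preferring composition via Dependency Injection over deep inheritance, can be illustrated with a small, self-contained sketch (all names are hypothetical): instead of growing an inheritance chain, the varying behaviour is injected, keeping the DIT of Report flat:

```csharp
using System;

// Varying behaviour lives behind an interface...
public interface IFormatter
{
    string Format(string body);
}

public class UpperCaseFormatter : IFormatter
{
    public string Format(string body) => body.ToUpperInvariant();
}

// ...and is composed in via the constructor instead of inherited,
// so Report's inheritance depth stays at 1 (just System.Object).
public class Report
{
    private readonly IFormatter _formatter;
    public Report(IFormatter formatter) => _formatter = formatter;
    public string Render(string body) => _formatter.Format(body);
}

public class CompositionSketch
{
    static void Main() =>
        Console.WriteLine(new Report(new UpperCaseFormatter()).Render("quarterly figures"));
    // prints: QUARTERLY FIGURES
}
```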

Immutability & Purity assertion

This is not a metric per se, but a set of dedicated CQLinq (more about this in the next paragraph) conditions and rules which assert immutability and purity on your classes and methods. Environments become increasingly multi-threaded, and immutable objects (whose state doesn’t change) and pure methods (whose execution doesn’t change field states) are necessary for controlling potentially negative side effects. To highlight one, I’ve chosen a relatively simple rule for this article, called “Fields should be marked as ReadOnly when possible”. Fields that are only assigned by the constructors of their class are marked as potentially immutable by NDepend, asserted by the IsImmutable metric. If these potentially immutable fields are not marked as ReadOnly, this rule will raise a warning. And there are eleven rules filled with metrics like this one to assert the immutability and purity of your code base.
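What the rule rewards looks like this in code — a contrived example of mine, not from the analysed application: the field is assigned only in the constructor and therefore marked readonly, and the method is pure because it reads state without mutating it:

```csharp
using System;

public class Temperature
{
    // Assigned only in the constructor, so it can (and should) be readonly.
    private readonly double _celsius;

    public Temperature(double celsius) => _celsius = celsius;

    // A pure method: computes a result without changing any field state.
    public double ToFahrenheit() => _celsius * 9 / 5 + 32;
}

public class ImmutabilitySketch
{
    static void Main() =>
        Console.WriteLine(new Temperature(100).ToFahrenheit()); // prints 212
}
```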

CQLinq

Code Query LINQ is, in my humble opinion, one of NDepend’s major selling points. It’s a language based on C#’s LINQ which lets you query your code base. Give or take 200 default queries and rules are provided with each new NDepend project, which you can adapt if need be. The CQLinq editor is just as much a pleasure to work with as the normal Visual Studio code editor, as it provides code completion and IntelliSense. It provides adequately descriptive information when compile errors occur. And last but not least, the documentation is integrated via tooltips, which is actually pretty handy!
For instance, if I take the rule “Avoid making complex methods even more complex (Source CC)”, which uses the aforementioned Cyclomatic Complexity, we can see that the rule triggers at a threshold of 6:
NDepend-figure4
Figure 4

We can easily change this to 5 for instance if we want the rule to be more restrictive. It is really worth spending time constructing or altering rules to see what you can get out of CQLinq and construct your own sets to comply with your Definition of Done.
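To give an impression of what such a rule looks like, here is a sketch in CQLinq style, modelled on the built-in complexity rules; treat the threshold as the part you would tune, and double-check the exact property names against your NDepend version:

```csharp
// CQLinq sketch: warn on methods whose source cyclomatic complexity
// exceeds a custom threshold (lower 20 to 15, etc., to be stricter).
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 20
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }
```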

Trend Monitoring

About 50 trend metrics are available by default, and more can be made with CQLinq. Some trend charts are on the dashboard by default, but you can also create them yourself and put them on the dashboard. One such chart is the default “Max” trend chart, which keeps track of the maximum values of four metrics:
NDepend-figure5
Figure 5

I could, for instance, deduce from the blue line that the maximum lines of code for a method has slightly increased, making an already large method even larger. I should really get this number down by refactoring, to decrease the complexity of the code and increase maintainability. The idea of trends is fairly straightforward and clearly explained on NDepend’s website.

Dependency Management

To obtain an overview of the coupling in your code base NDepend provides a Dependency Graph, which I generated for the subject application of this article:
NDepend-figure6
Figure 6

Remember that we are after “high cohesion” and “low coupling”. Concerning the low coupling, I can see in this graph that coupling only occurs on framework-related components, so this is not really something to be alarmed about. As for the high cohesion, I can see that the entire application has been stuffed into one big “god class” (see: Sod.Notificare.NumberVerifier), and this is something to investigate further, because elements with low cohesion bundled up in the same component can cause maintenance headaches. You can hold your mouse over the component you want more information about:
NDepend-figure7
Figure 7

Again, this provides you everything you need to start improving the quality of your code base. It is obvious this component is going to be refactored into smaller components, improving the cohesion and thus the maintainability.
You can also view dependency cycles in a matrix.

Test coverage

NDepend doesn’t analyse coverage itself, but instead relies on coverage data you can import from a variety of technologies.
I added unit tests to the legacy application I used for this article and analysed the code coverage with NCover. Then I imported the code coverage data into NDepend as described in their documentation. The relevant section on the dashboard now shows:
NDepend-figure8
Figure 8

You will get some additional information from the Change Risk Analyser and Predictor (C.R.A.P.) code metric, which only executes when code coverage data has been imported from code coverage files:
NDepend-figure9
Figure 9
This tells me I have some work to do if I want to get the coverage up and the CRAP score down (the lower this metric is, the better it’ll be for maintainability).

Complexity Diagrams

NDepend spits out a couple of diagrams to show various forms of complexity in your code base. The trend charts, dependency matrix and dependency graph have been covered in earlier paragraphs. NDepend can also generate a Treemap (read: heat map/code metric view) and the Abstractness vs. Instability diagram.

Treemap:

NDepend-figure10
Figure 10

It allows you to specify at what depth (read: level) you want to analyse the code, which in the above figure is “Method”. It allows you to specify which metric to use for the relative size of the blocks, in this case “Lines of code”. And last but not least, which metric determines the colour of a block, which is set (again) to “Cyclomatic Complexity”.
But there’s a reason the CC metric pops up again: it’s a very useful one. The above chart allowed me to immediately zoom in on the single yellow block that really jumps out, and when you hold your cursor over it, it shows why:
NDepend-figure11
Figure 11

As you can see from the information shown on screen, this block is relatively big and has a relatively high CC, which makes it an excellent candidate to refactor first. After that I’ll tackle the largest green block on the left and try to refactor it. And this is where the treemap excels for me: if I have a legacy application I need to start maintaining and I want to clean up the code, I’ll definitely fire up this diagram first.
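As a toy illustration of the kind of refactoring a high-CC block invites (hypothetical code, not from the analysed application): nested conditionals each add a decision point, while flattening them into guard clauses keeps the count low and the code readable.

```csharp
// Before: nested conditions inflate cyclomatic complexity.
static string Classify(int age)
{
    if (age >= 0)
    {
        if (age < 13)
        {
            return "child";
        }
        else
        {
            if (age < 18) return "teen";
            else return "adult";
        }
    }
    return "invalid";
}

// After: guard clauses keep every path flat, with the same behaviour.
static string ClassifyFlat(int age)
{
    if (age < 0) return "invalid";
    if (age < 13) return "child";
    if (age < 18) return "teen";
    return "adult";
}
```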

Abstractness vs. Instability diagram:

NDepend-figure12
Figure 12
To put it briefly (and cut some serious corners), this graph combines two metrics: Abstractness, which basically represents how much the code base depends on abstract classes, interfaces and such, so it can cope with changes more easily and stays flexible; and Instability, which is about how much the code base is coupled to concrete implementations, making it more inflexible. When you hit an extreme in one of these metrics you end up in either the “Zone of Pain” or the “Zone of Uselessness”.
This graph is all about balance, and the larger your system is, the more important this becomes. In this particular case the graph told me that I’ll be changing the application severely with every request that comes in. That can be a deliberate choice: abstracting this application to make it more stable is not worth the return on investment, so we’ve chosen to occasionally have more work to do, because the change frequency is pretty low.
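For reference, these metrics follow Robert C. Martin’s well-known package metric definitions, which can be sketched as small helpers (illustrative code; the class and method names are my own, not NDepend’s):

```csharp
using System;

class MartinMetrics
{
    // Abstractness A: fraction of abstract types in a component.
    public static double Abstractness(int abstractTypes, int totalTypes)
        => (double)abstractTypes / totalTypes;

    // Instability I: efferent (outgoing) coupling relative to total coupling.
    public static double Instability(int efferent, int afferent)
        => (double)efferent / (efferent + afferent);

    // Distance from the "main sequence" A + I = 1; 0 means perfectly balanced,
    // values near 1 land in the Zone of Pain (A=0, I=0) or Uselessness (A=1, I=1).
    public static double Distance(double a, double i)
        => Math.Abs(a + i - 1);
}
```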

Versus ReSharper

You are probably wondering how NDepend stacks up against tools you have already worked with. I know I was! I am most familiar with ReSharper, and my instinct was to compare NDepend with this tool.
If NDepend is the Swiss army knife, then ReSharper is like a scalpel. In fact, when you dig into it, it is not really appropriate to compare the two tools at all. I’ve said on several occasions in this article that the conclusion after observing some metric or validating a rule was often to refactor or otherwise improve the code base. While those analyses across the entire code base were done with NDepend, ReSharper comes in to help with the actual groundwork of refactoring and renaming, popping up with handy advice on how to improve the code even further, almost at the level of a single line of code. This is an area NDepend does not touch, and ReSharper cannot provide this much information across your entire code base. In other words, they complement each other! Use ReSharper in your day-to-day work to boost your productivity, and have NDepend tell you something about the quality of your code with every build. More on this can be read from mister NDepend himself.

Conclusion

When I go out camping, I always want my Swiss army knife with me. And now when I go out developing software, I know I’ll always want NDepend with me! The static code analysis capabilities of Visual Studio itself, FxCop and StyleCop all have their uses, but are not at the level NDepend offers (see a great discussion about FxCop and StyleCop here). I am starting to sound like a commercial, but that’s not all! You can compare code bases between builds. You can integrate NDepend into your build process, which can be as simple as executing the NDepend console and catching the exit code. And if you want to extend your Swiss army knife, have a look at NDepend’s API. With it, it is easy to develop your own analysis tools based on NDepend’s CQLinq and NDepend.PowerTools; you’ll be able to do programmatically almost anything NDepend does. You can read more about it here.
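To give an impression of what querying your code base with CQLinq looks like, here is a sketch of a custom rule (based on NDepend’s documented query syntax; the thresholds are arbitrary values I picked for illustration):

```
// Flag methods that are both long and complex
warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 30 && m.CyclomaticComplexity > 10
orderby m.CyclomaticComplexity descending
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }
```

Rules like this run on every analysis, so a method slipping past your size or complexity budget shows up as a warning on the dashboard automatically.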

I do have a small concern though. If you are really going to market yourself as the Swiss army knife, then go all the way and provide code coverage data yourself instead of relying on other tools. I was personally bummed out by having to buy yet another tool like NCover, or upgrade my MSDN subscription, when NDepend isn’t free either.

Having said that, code coverage is something NDepend may not want to compete on when it has so many other things to offer. Coverage tools are extremely technical and very specific, and NCover, dotCover and Visual Studio all do a great job. Plus, nobody else seems to have complained about this; users are generally asking for support for additional tools like NCrunch instead. In fact, it is the only con I could come up with. In other words, if you care about the quality of your code, you need NDepend!


Static File Caching And Cache-busting In ASP.NET Core 1.0 MVC 6

Caching

This is my English version of my original article in Dutch, which can be found here. When developing web applications, one of the first things you will want to do to improve performance is to cache static files [1] like images and script files, and only retrieve them from the server when they have actually been modified.

There are a number of situations in which a browser needs to check whether a cached entry is valid [2]:

  • The cached entry has no expiration date and the content is being accessed for the first time in a browser session
  • The cached entry has an expiration date but it has expired
  • The user has requested a page update by clicking the Refresh button or pressing F5

Side note: When you start testing the caching of static files, pay close attention to the last point. Make a link that links back to the page itself instead of refreshing or hitting F5, because otherwise the caching won’t kick in.

As you can read in the first two situations, cached entries can have an expiration date. This used to be an actual date-time stamp in the HTTP header, but nowadays the “max-age” directive of the “Cache-Control” header is much more common, because it lets you set an interval [3] instead of an explicit expiration date. An interval is much more dynamic, and max-age is the form preferred by modern browsers. So let’s use it!

Side note: According to specifications you shouldn’t set max-age to more than one year.
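To make the difference concrete, this is roughly what the two styles look like in raw response headers (illustrative values only):

```
Expires: Thu, 01 Dec 2016 16:00:00 GMT    (legacy: absolute expiration date)
Cache-Control: max-age=31536000           (modern: interval in seconds, here one year)
```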

To be able to set max-age in the “Cache-Control” header you first need to add the Static Files middleware. Middleware deserves its own article and is not the focus of this one. In an ASP.NET Core 1.0 application this is done by adding the following line to the Configure method of the Startup class:

app.UseStaticFiles();

Listing 1: Add Static Files middleware to the HTTP request pipeline

Once you’ve set this up you need to set max-age in the response headers. The UseStaticFiles method accepts a StaticFileOptions instance. This class has an OnPrepareResponse property [4] which you can use to add or change the response headers. It expects a method that takes a single parameter and does not return a value, and it can be used to set the max-age as follows:

app.UseStaticFiles(new StaticFileOptions()
{
    OnPrepareResponse = context =>
    {
        var headers = context.Context.Response.GetTypedHeaders();
        headers.CacheControl = new CacheControlHeaderValue()
        {
            MaxAge = TimeSpan.FromSeconds(60)
        };
    }
});

Listing 2: Changing the response header via OnPrepareResponse

As you can see, MaxAge expects a TimeSpan, which makes it really easy to set any interval you want. In this case I’ve set it to 60 seconds for testing purposes; for production you’ll more likely want to set this value to a year or so. We can check the HTTP headers in, for instance, Chrome’s developer tools:

200OKfromcache
Figure 1: max-age = 60 and Status Code = 200 OK (from cache)

By examining the headers I can see that max-age has indeed been set to 60 seconds in the Cache-Control response header, and when I retrieve the page again by clicking a link that references the same page (no refresh or F5) the status code is 200 OK (from cache). When I click that same link again after 60 seconds, I get status code 304 Not Modified. Excellent: the interval has expired and the browser sends a very small request to the server, asking only whether the file has been modified. The server answers with a tiny ‘304 Not Modified’ response, indicating the file hasn’t changed:

304NotModified
Figure 2: Status Code = 304 Not Modified

So the large .jpg file didn’t need to be transferred across the wire. This is all coming along nicely, right? But what about files that have been modified within the interval? Now we need to talk about cache-busting.

Cache-busting

Cache-busting is basically preventing the browser from serving a static file from its cache [5]. MVC 6 has a Tag Helper [6] attribute called “asp-append-version” which can be used on HTML tags that involve static files, like the “img” tag. When you set its value to “true”, it uses a common cache-busting technique: versioning the static file. The Tag Helper appends something like “?v=o90nYT80kXYsi_vP1o1mIgqmQzrL3QV-ADZlxN_M9_8” to the request URL. What you see here is a query string (hence the ?) with the parameter v (which stands for version) and, after the equals sign (=), a hash code computed by the server from the contents of the static file being served. So for example:

<img src="~/img/Ahi99Y3LN6EJRBeUu5NB49MAiG_42tnVdeRrYtvr5qNu.jpg" asp-append-version="true" />
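Under the hood, the version value is a hash of the file’s contents. As a rough sketch of how such a value could be computed (a simplification with hypothetical class and method names; ASP.NET Core’s file version provider similarly derives a URL-safe base64 string from a SHA-256 hash):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class FileVersion
{
    public static string Compute(string path)
    {
        using (var sha256 = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            byte[] hash = sha256.ComputeHash(stream);
            // Base64url-encode: swap '+' and '/' for URL-safe characters,
            // and strip the '=' padding.
            return Convert.ToBase64String(hash)
                .Replace('+', '-')
                .Replace('/', '_')
                .TrimEnd('=');
        }
    }
}
```

Because the hash depends only on the file bytes, an unchanged file always yields the same ?v= value, while any content change yields a new one.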

The browser will cache the static file using this modified request URL as a reference. When the content of the file changes, the server computes a new hash code based on the new content, so the request URL actually changes for the browser, ‘tricking’ it into retrieving the static file from the server instead of from its cache. Let’s see this in action:

200OKFromCache_2
Figure 3: Status Code = 200 OK (from cache) with static file versioning
If we click the link within the max-age interval we see status code 200 OK, but “(from cache)”! Now look at the “Request URL” in figure 3: you can see the versioning of the .jpg file in action.

After the interval from max-age, the browser behaves as expected and needs to ask the server if anything has been modified:
304NotModified_2
Figure 4: Status Code = 304 Not Modified with static file versioning

But what if we are within the max-age interval and the static file is changed? This would mean in this example that the image has been altered. Well, let me show you:
200OK
Figure 5: Status Code = 200 OK with static file versioning

Please compare the Request URL in this screenshot with the one in figure 4, and notice how the hash code is completely different. The browser bypassed its cache and went to the server to retrieve a ‘new’ static file, because to the browser the link is different.

Side note: I got this result by clicking the link to the page itself again, making sure I got status code 200 OK (from cache). Then I quickly opened the image in Paint, added a line, saved it and closed it, and clicked the link again. Word to the wise: increase your max-age to an hour or so instead of 60 seconds.

Conclusion

The ease with which you can hook into the HTTP request pipeline in ASP.NET Core 1.0 and add or change the response headers is fantastic. It allows us to set the max-age with minimal effort, enabling the browser’s native capability to cache static files. And the help you get from MVC 6’s Tag Helper attribute ‘asp-append-version’, letting you bust that cache when a static file has been modified, is really awesome. Once this is set up, you as a developer no longer need to worry about all the low-level details and can start creating awesome web applications for your clients!

Links

  1. Static files: https://docs.asp.net/en/latest/fundamentals/static-files.html
  2. Great site about HTTP and caching: http://www.httpwatch.com/httpgallery/caching/
  3. Another great source on HTTP cache headers: http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/
  4. OnPrepareResponse: https://msdn.microsoft.com/en-us/library/microsoft.owin.staticfiles.staticfileoptions.onprepareresponse%28v=vs.113%29.aspx#P:Microsoft.Owin.StaticFiles.StaticFileOptions.OnPrepareResponse
  5. Definition cache-busting: http://digitalmarketing-glossary.com/What-is-Cache-busting-definition
  6. MVC6 and Tag Helpers: https://docs.asp.net/projects/mvc/en/latest/views/tag-helpers/index.html