ASP.NET Core automatic type registration

A little bit of syntactic sugar for you this Friday!

Let’s say we have an application that uses a command pattern to keep the controllers slim. Maybe we have a base command class that looks a bit like:

public abstract class CommandBase<TModel> where TModel : class
{
  protected CommandBase(MyDbContext db)
  {
    Db = db;
  }

  protected MyDbContext Db { get; }

  public abstract Task<CommandResult> ExecuteAsync(TModel model);
}
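
To make the pattern concrete, here’s a sketch of what one command might look like (MyCommandModel, the Widget entity and CommandResult.Success() are illustrative names, not defined above):

public class MyCommandModel
{
  public string Name { get; set; }
}

public class MyCommand : CommandBase<MyCommandModel>
{
  public MyCommand(MyDbContext db) : base(db) { }

  public override async Task<CommandResult> ExecuteAsync(MyCommandModel model)
  {
    // The business logic lives here, against the injected context.
    // Widgets and CommandResult.Success() are hypothetical.
    Db.Widgets.Add(new Widget { Name = model.Name });
    await Db.SaveChangesAsync();
    return CommandResult.Success();
  }
}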

Using a pattern like this means that we can have very slim controller actions where the logic is moved into business objects:

public async Task<IActionResult> Post(
  [FromServices] MyCommand command,
  [FromBody] MyCommandModel model)
{
  if (!ModelState.IsValid)
    return BadRequest(ModelState);
  var result = await command.ExecuteAsync(model);
  return HandleResultSomehow(result);
}

We could slim this down further using a validation filter, but this is good enough for now. Note that we’re injecting our command in the action parameters via [FromServices], which makes our actions very easy to test if we want to.
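
For example, here’s a minimal sketch of a test for the invalid-model path, assuming xUnit and a hypothetical MyController (with a parameterless constructor) hosting the Post action above:

[Fact]
public async Task Post_ReturnsBadRequest_WhenModelStateIsInvalid()
{
  var controller = new MyController();
  controller.ModelState.AddModelError("Name", "Required");

  // The command is never executed on this path, so null is safe here.
  var result = await controller.Post(command: null, model: new MyCommandModel());

  Assert.IsType<BadRequestObjectResult>(result);
}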

The problem here is that, unless we register all of our command classes with DI, this won’t work, and you’ll see an `Unable to resolve service for type` error. Registering the types is easy, but it’s also easy to forget to do, and leads to a bloated startup class. Instead, we can ensure that any commands which are named appropriately are automatically added to our DI pipeline by writing an extension method:

public static void AddAllCommands(this IServiceCollection services)
{
  const string NamespacePrefix = "Example.App.Commands";
  const string NameSuffix = "Command";

  var commandTypes = typeof(Startup)
    .Assembly
    .GetTypes()
    .Where(t =>
      t.IsClass &&
      !t.IsAbstract && // skip abstract classes such as CommandBase
      t.Namespace?.StartsWith(NamespacePrefix, StringComparison.OrdinalIgnoreCase) == true &&
      t.Name.EndsWith(NameSuffix, StringComparison.OrdinalIgnoreCase));

  foreach (var type in commandTypes)
  {
    services.AddTransient(type);
  }
}

With this in place, our naming convention ensures that all of our command classes are automatically registered and made available to our controllers.
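
The only registration left in Startup is a single call – a minimal sketch, assuming the conventional ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();
  services.AddDbContext<MyDbContext>();

  // One line replaces dozens of individual AddTransient calls.
  services.AddAllCommands();
}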

Logging traces in ASP.NET Core 2.0 with Application Insights

Application Insights (AI) is a great way to gather diagnostic information from your ASP.NET app, and with Core 2.0, it’s even easier to get started. One thing that the tutorials don’t seem to cover is how to see your trace logs in AI. By “tracing”, I mean things like:

_logger.LogInformation("Loaded entity {id} from DB", id);

This can be an absolute life-saver during production outages. In simpler times, we might have used DebugView and a TraceListener to view this information (in our older, on-prem apps, we used log4net to persist some of this information to disk for more permanent storage). In an Azure world, this isn’t available to us, but AI in theory gives us a one-stop-shop for all our telemetry.

Incidentally, it’s worth putting some care into designing your tracing strategy – try to establish conventions for message formats, what log levels to use when, and what additional data to record.
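
For instance, one possible convention (the messages, levels and names here are purely illustrative) is to reserve each level for a particular kind of event, and to always use structured placeholders rather than string concatenation:

// Debug: diagnostic detail for tracking down specific issues.
_logger.LogDebug("Cache miss for {CacheKey}", key);

// Information: notable business events.
_logger.LogInformation("Order {OrderId} submitted by {UserId}", order.Id, userId);

// Warning: something went wrong, but we recovered.
_logger.LogWarning("Retrying call to {Endpoint}, attempt {Attempt}", endpoint, attempt);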

You can see the tracing data in the AI query view as one of the available tables:

[Screenshot: the AI “traces” table]

For our sample application, we’ve created a new website by using:

dotnet new webapi

We’ve then added the following code into ValuesController:

public class ValuesController : Controller
{
  private readonly ILogger _logger;

  public ValuesController(ILogger<ValuesController> logger)
  {
    _logger = logger;
  }

  [HttpGet]
  public IEnumerable<string> Get()
  {
    _logger.LogDebug("Loading values from {Scheme}://{Host}{Path}", Request.Scheme, Request.Host, Request.Path);
    return new string[] { "value1", "value2" };
  }
}

We have injected an instance of ILogger, and we’re writing some very basic debugging information to it (this is a contrived example as the framework already provides this level of logging).

Additionally, we’ve followed the steps to set our application up with Application Insights. This adds the Microsoft.ApplicationInsights.AspNetCore package, and the instrumentation key into our appsettings.json file. Now we can boot the application up, and make some requests.
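
The relevant section of appsettings.json ends up looking something like this (the key below is a placeholder – use the instrumentation key from your own AI resource):

{
  "ApplicationInsights": {
    "InstrumentationKey": "00000000-0000-0000-0000-000000000000"
  }
}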

If we look at AI, we can see data in requests, but nothing in traces (it may take a few minutes for data to show up after making the requests, so be patient). What’s going on?

By default, AI is capturing your application telemetry, but not tracing. The good news is that it’s trivial to add support for traces, by making a slight change to Startup.cs:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
  loggerFactory.AddApplicationInsights(app.ApplicationServices, LogLevel.Debug);

  // ...the rest of the usual pipeline configuration (app.UseMvc() and so on).
}

We have just added AI into our logging pipeline; the code above will capture all messages from ILogger, and, if they are at logging level Debug or above, will persist them to AI. Make some more requests, and then look at the traces table:

[Screenshot: our trace messages appearing in the “traces” table]

One further thing that we can refine is exactly what we add into here. You’ll notice in the screenshot above that everything is going in: our messages, and the ones from ASP.NET. We can fine-tune this if we want:

loggerFactory.AddApplicationInsights(app.ApplicationServices, (category, level) =>
{
  return category.StartsWith("MyNamespace.") && level > LogLevel.Trace;
});

You’re free to do any filtering you like here, but the example above will only log trace messages where the logging category starts with one of your namespaces, and where the log level is greater than Trace. You could get creative here, and write a rule that logs any warnings and above from System.*, but everything from your own application.
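
A sketch of that rule might look like this (MyNamespace is a stand-in for your application’s root namespace):

loggerFactory.AddApplicationInsights(app.ApplicationServices, (category, level) =>
{
  // Everything from our own code...
  if (category.StartsWith("MyNamespace.", StringComparison.Ordinal))
    return true;

  // ...but only warnings and above from the framework and libraries.
  return level >= LogLevel.Warning;
});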

Traces are one of the biggest life-savers you can have during a production issue – make sure that you don’t lose them!

VSTS agile work item types explained

At Sun Branding Solutions, we use the Agile template for our work in Visual Studio Team Services. We’ve been using this for a while, but people still occasionally ask what the difference is between epics, features, stories and tasks, so here are my thoughts on when and why to use the available item types.

[Diagram: work item levels]

Epics

If work items were books, then epics are The Lord of the Rings. An epic should represent a strategic initiative for the organisation – for example, “let’s go mobile”, or adding a brand new area of functionality. They may also be technical in nature, such as moving to the cloud, or a major re-skinning, but they should always be aligned to business goals.

An epic probably won’t have much description, and certainly shouldn’t include detailed specifications (that’s up to the Product Owner and their team of Business Analysts to produce).

Epics will span several sprints. Try not to add too many to the backlog, as that cheapens their impact. You probably wouldn’t have more than a handful within a year.

Epics are split into multiple…

Features

Going with our book metaphor, a feature is an individual Harry Potter novel – part of a greater whole, but still a self-contained unit of value. The feature is defined by the Product Owner, with the help of BAs, key customers and end users. An example might be “upload files from OneDrive” – or, to put it another way, a feature is a bullet point on your marketing literature, whilst an epic is a heading.

If the product roadmap is largely defined by your customers, then features are the things that your customers really care about. If your roadmap is internal, then features are what your managers care about.

A feature may span multiple sprints, although ideally not more than three.

Features are broken down into…

User Stories

“As an X, I want to Y, so that I can Z.”

The BA will typically break a feature down into the individual requirements, which become user stories. A user story should be a distinct chunk of work that can be delivered and provide some value to the product.

Do not be tempted to split stories into “database architecture”, “data access layer”, “UI” – stories should reflect user requirements. Use tasks for development work items.

Stories have acceptance criteria that represent the requirements the developers must meet. These are absolutely crucial, because they are how the output will be tested and judged.

At this level, we also have…

Bugs

Probably the easiest to define, a bug is a defect that has been noticed and triaged. Bugs have reproduction steps and acceptance criteria. Typically, your QA team will own bugs, but your Product Owner, BA or Product Consultant may also be interested in specific bugs that affect them or their customers.

Beware of people using bugs to slip in change requests – these should always be handled as User Stories instead!

Both User Stories and Bugs break down into…

Tasks

A task is the only type of work item that counts towards the burn-down. Tasks have original and current estimates, the amount of time put in, and the amount of time remaining (it’s the remaining time that counts towards the burn-down).

Developers own tasks. It is down to the architects and developers to analyse the User Stories (ideally, they should have been involved in drafting the User Stories and even Features to ensure they have a good general understanding), and set up as many tasks as they think are required.

The level of detail in tasks will vary depending on the team – a small, closely-knit team can get away with titles only, whilst a larger team may need more detail.

Tasks have an activity type, which defines what kind of resource is required – for example Development, Design, Documentation or Testing.

When setting the capacity of your team, you can select the type of activity that your team members have, and this will help you see how much work of each type you can accommodate within a sprint.