ASP.NET Core Identity with a custom data store

If you create a new website with ASP.NET Core, and select the option for individual user accounts, you’ll end up with a site that uses the Entity Framework implementation of ASP.NET Core Identity.

I’ve found recently that many people believe that this is all there is to identity 🤷‍♂️

The truth is that “identity”, by which I mean the set of abstractions provided by Microsoft.AspNetCore.Identity, is incredibly flexible and highly pluggable. It’s far easier to plug your existing users database into these abstractions than it ever was to implement a custom Membership provider 🥳

The first thing to remember is that identity is just a set of common abstractions; by default, the rest of the ASP.NET Core framework (MVC etc) understands these abstractions and will use them for authentication and authorisation.

This means that all you need to do is fill in these interfaces to enable out-of-the-box identity management. Let’s work through an example.

dotnet new classlib -o MyCustomIdentityProvider
dotnet add package Microsoft.AspNetCore.Identity
dotnet add package System.Data.SqlClient
dotnet add package Dapper

Our example is going to use a custom SQL Server database, but you could use anything from Postgres to flat XML files to a WCF service contract to get your data.

In this example we’re going to do a minimal implementation, so we’re going to implement the following interfaces: IUserStore<TUser>, IUserPasswordStore<TUser>, IUserRoleStore<TUser> and IRoleStore<TRole>, plus a custom IPasswordHasher<TUser>.

There are quite a few interfaces in this namespace, but don’t panic – you only need to implement the ones you actually use (for example, if you’re not using two-factor authentication, there’s no need to implement IUserTwoFactorStore<TUser>).

The other thing to note is that you need to define your user and role objects first. These don’t have to inherit from anything, so long as they’re classes; you’re free to put whatever you want in here. Let’s sketch out a basic example:

public class MyCustomUser
{
  public Guid UserId { get; set; }
  public string Username { get; set; }
}

public class MyCustomRole
{
  public Guid RoleId { get; set; }
  public string RoleName { get; set; }
}

Note that, with a bit more effort, you could go a step further and make these immutable objects. I’ve left all the properties as get/set for now because it keeps this example simpler, but you can go nuts and create your classes however you want – the power is in your hands!

So far, so simple. Next, we’re going to create a base class for our implementations that can encapsulate data access, and an options object for configuring it:

public class CustomIdentityOptions
{
  public string ConnectionString { get; set; }
}

public abstract class CustomIdentityBase
{
  private readonly string _connectionString;

  protected CustomIdentityBase(IOptions<CustomIdentityOptions> options)
  {
    _connectionString = options.Value.ConnectionString;
  }

  protected SqlConnection CreateConnection()
  {
    var con = new SqlConnection(_connectionString);
    con.Open();
    return con;
  }
}

That’s quite a bit of boilerplate, but it’ll make our lives easier. Next, let’s implement our first interface – IUserStore<MyCustomUser>. To do this, we’ll create a class that extends our base class and implements IUserStore<MyCustomUser> – then, all we need to do is fill in the blanks!

public class CustomUserStore : CustomIdentityBase, IUserStore<MyCustomUser>
{
  public CustomUserStore(IOptions<CustomIdentityOptions> options)
    : base(options)
  {
  }

  public void Dispose()
  {
  }

  /* WHOAH! That's a LOT of methods to fill in! */
}

Don’t despair! Yes, there are now a lot of methods to fill in. This is actually a good thing. By giving you all of these methods, you now have the power to implement identity with as much flexibility as you’re likely to need. Let’s work through some examples:

public async Task<IdentityResult> CreateAsync(MyCustomUser user, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    await con.ExecuteAsync("INSERT INTO Users ( UserId, Username ) VALUES ( @UserId, @Username )", user);
    return IdentityResult.Success;
  }
}

Because the interface is pretty flexible, you can store the user data however you like – in our example, we use ADO.NET and some Dapper magic. Let’s look at some more methods:

public async Task<MyCustomUser> FindByNameAsync(string normalizedUserName, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    return await con.QueryFirstOrDefaultAsync<MyCustomUser>(
      "SELECT UserId, Username " +
      "FROM Users " +
      "WHERE Username = @normalizedUserName",
      new { normalizedUserName });
  }
}

Easy, right? You can see that it would be very straightforward to plug in a web service call or a Mongo query in here, depending on your infrastructure.
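
The remaining lookup methods on IUserStore<MyCustomUser> follow exactly the same pattern. As a rough sketch (same assumed Users table as above), FindByIdAsync might look like this:

public async Task<MyCustomUser> FindByIdAsync(string userId, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    // The framework hands us the ID as a string, so parse it back to a Guid
    return await con.QueryFirstOrDefaultAsync<MyCustomUser>(
      "SELECT UserId, Username FROM Users WHERE UserId = @userId",
      new { userId = Guid.Parse(userId) });
  }
}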

Next up, some methods that might look a bit odd, because in our example we implement them by loading properties from the user object itself. Take a look:

public Task<string> GetNormalizedUserNameAsync(MyCustomUser user, CancellationToken cancellationToken)
{
  return Task.FromResult(user.Username.ToLowerInvariant());
}

public Task<string> GetUserIdAsync(MyCustomUser user, CancellationToken cancellationToken)
{
  return Task.FromResult(user.UserId.ToString());
}

public Task<string> GetUserNameAsync(MyCustomUser user, CancellationToken cancellationToken)
{
  return Task.FromResult(user.Username);
}

That looks odd, but remember that we might be using a TUser object which doesn’t contain this information, in which case we’d need to look it up from a datastore. It also lets the framework know which properties of our TUser object represent the user ID and user name, without restricting us to a specific base class.

Next, we have some other slightly odd methods:

public Task SetNormalizedUserNameAsync(MyCustomUser user, string normalizedName, CancellationToken cancellationToken)
{
  user.Username = normalizedName;
  return Task.CompletedTask;
}

public Task SetUserNameAsync(MyCustomUser user, string userName, CancellationToken cancellationToken)
{
  user.Username = userName;
  return Task.CompletedTask;
}

Again, these look odd because of the structure of our object, but they would also enable you to call remote services for validation, etc.

If we implemented IUserEmailStore<TUser>, then we could start adding logic for storing email addresses and, importantly, storing whether those addresses are confirmed (useful for scenarios where users must confirm their email before being allowed to log in).
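
As a rough sketch (and assuming Email and EmailConfirmed columns on the Users table, which aren’t part of the example schema above), a couple of those methods might look like this:

public async Task<string> GetEmailAsync(MyCustomUser user, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    return await con.QuerySingleOrDefaultAsync<string>(
      "SELECT Email FROM Users WHERE UserId = @userId",
      new { user.UserId });
  }
}

public async Task<bool> GetEmailConfirmedAsync(MyCustomUser user, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    return await con.QuerySingleOrDefaultAsync<bool>(
      "SELECT EmailConfirmed FROM Users WHERE UserId = @userId",
      new { user.UserId });
  }
}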


Phew, we’re getting there! Now let’s move on and add support for password validation. We’re going to make a tweak to our user store class:

public class CustomUserStore :
  CustomIdentityBase,
  IUserStore<MyCustomUser>,
  IUserPasswordStore<MyCustomUser>

This will now require us to fill in three new methods. In our example, let’s assume that passwords are stored in a separate table (for security reasons):

public async Task SetPasswordHashAsync(MyCustomUser user, string passwordHash, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    // TODO: insert or update logic skipped for brevity
    await con.ExecuteAsync(
      "UPDATE Passwords SET PasswordHash = @passwordHash " +
      "WHERE UserId = @userId",
    new { user.UserId, passwordHash });
  }
}

public async Task<string> GetPasswordHashAsync(MyCustomUser user, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    return await con.QuerySingleOrDefaultAsync<string>(
      "SELECT PasswordHash " +
      "FROM Passwords " +
      "WHERE UserId = @userId",
    new { user.UserId });
  }
}

public async Task<bool> HasPasswordAsync(MyCustomUser user, CancellationToken cancellationToken)
{
  return !string.IsNullOrEmpty(await GetPasswordHashAsync(user, cancellationToken));
}

Again, you can get the password from wherever you want. Perhaps you want to store passwords in a super-secure database with TDE and massively locked-down access; if so then no problem! Just open a different connection. Passwords in a dedicated key store? Fine, we can do whatever we like in this class.

Speaking of passwords and hashing, our datastore uses BCrypt to hash passwords. Let’s implement a basic hasher so that our users can verify their passwords:

dotnet add package BCrypt.Net-Next

public class CustomPasswordHasher : IPasswordHasher<MyCustomUser>
{
  public string HashPassword(MyCustomUser user, string password)
  {
    return BCrypt.Net.BCrypt.HashPassword(password);
  }

  public PasswordVerificationResult VerifyHashedPassword(MyCustomUser user, string hashedPassword, string providedPassword)
  {
    return BCrypt.Net.BCrypt.Verify(providedPassword, hashedPassword) ?
      PasswordVerificationResult.Success :
      PasswordVerificationResult.Failed;
  }
}

For more on using BCrypt and a far more complete implementation of a custom password hasher, see “Migrating passwords in ASP.NET Core Identity with a custom PasswordHasher” by Andrew Lock.


Now we’re going to quickly move on to implement roles – I won’t include all the details because by now this should be pretty familiar. The implementation looks a bit like this:

public class CustomRoleStore : CustomIdentityBase, IRoleStore<MyCustomRole>
{
  public CustomRoleStore(IOptions<CustomIdentityOptions> options)
    : base(options)
  {
  }

/* snip */

  public async Task<MyCustomRole> FindByNameAsync(string normalizedRoleName, CancellationToken cancellationToken)
  {
    using (var con = CreateConnection())
    {
      return await con.QueryFirstOrDefaultAsync<MyCustomRole>(
        "SELECT RoleId, RoleName FROM Roles " +
        "WHERE RoleName = @normalizedRoleName",
      new { normalizedRoleName });
    }
  }

/* snip */
}
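
To give a flavour of what sits behind those /* snip */ markers, CreateAsync and GetRoleNameAsync might look something like this (the same caveats as before apply; it’s just a sketch against the assumed Roles table):

public async Task<IdentityResult> CreateAsync(MyCustomRole role, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    await con.ExecuteAsync(
      "INSERT INTO Roles ( RoleId, RoleName ) VALUES ( @RoleId, @RoleName )",
      role);
    return IdentityResult.Success;
  }
}

public Task<string> GetRoleNameAsync(MyCustomRole role, CancellationToken cancellationToken)
{
  return Task.FromResult(role.RoleName);
}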

Finally, we need to be able to link our users to roles. Go back to our custom user store and add the following line:

public class CustomUserStore :
  CustomIdentityBase,
  IUserStore<MyCustomUser>,
  IUserPasswordStore<MyCustomUser>,
  IUserRoleStore<MyCustomUser>

Now we need to implement a handful of methods, like so:

public async Task AddToRoleAsync(MyCustomUser user, string roleName, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    await con.ExecuteAsync(
      "INSERT INTO UserRoles ( UserId, RoleId ) " +
      "SELECT @userId, RoleId " +
      "FROM Roles " +
      "WHERE RoleName = @roleName",
      new { user.UserId, roleName });
  }
}

public async Task<IList<string>> GetRolesAsync(MyCustomUser user, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    return (await con.QueryAsync<string>(
      "SELECT r.RoleName " +
      "FROM UserRoles AS ur " +
      "JOIN Roles AS r ON ur.RoleId = r.RoleId " +
      "WHERE ur.UserId = @userId",
      new { user.UserId })
    ).ToList();
  }
}

public async Task<IList<MyCustomUser>> GetUsersInRoleAsync(string roleName, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    return (await con.QueryAsync<MyCustomUser>(
      "SELECT u.UserId, u.Username " +
      "FROM UserRoles AS ur " +
      "JOIN Roles AS r ON ur.RoleId = r.RoleId " +
      "JOIN Users AS u ON ur.UserId = u.UserId " +
      "WHERE r.RoleName = @roleName",
      new { roleName })
    ).ToList();
  }
}

public async Task<bool> IsInRoleAsync(MyCustomUser user, string roleName, CancellationToken cancellationToken)
{
  var roles = await GetRolesAsync(user, cancellationToken);
  return roles.Contains(roleName);
}
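
The one remaining IUserRoleStore<MyCustomUser> method, RemoveFromRoleAsync, is just the inverse of AddToRoleAsync; a sketch might be:

public async Task RemoveFromRoleAsync(MyCustomUser user, string roleName, CancellationToken cancellationToken)
{
  using (var con = CreateConnection())
  {
    // Delete the link row for this user and the named role
    await con.ExecuteAsync(
      "DELETE ur FROM UserRoles AS ur " +
      "JOIN Roles AS r ON ur.RoleId = r.RoleId " +
      "WHERE ur.UserId = @userId AND r.RoleName = @roleName",
      new { user.UserId, roleName });
  }
}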

And that’s it! We now have a custom bare-bones identity provider. To use this, we can just hook it up in Startup.cs, as follows:

services.Configure<CustomIdentityOptions>(options =>
{
  options.ConnectionString = Config.GetConnectionString("MyCustomIdentity");
});

services.AddScoped<IPasswordHasher<MyCustomUser>, CustomPasswordHasher>();
services.AddIdentity<MyCustomUser, MyCustomRole>()
  .AddUserStore<CustomUserStore>()
  .AddRoleStore<CustomRoleStore>();
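
With that wired up, the rest of the framework just works through the standard managers. For example, a hypothetical registration action (the controller and its parameters are invented purely for illustration) could use UserManager<MyCustomUser> without knowing anything about our SQL:

public class AccountController : Controller
{
  private readonly UserManager<MyCustomUser> _userManager;

  public AccountController(UserManager<MyCustomUser> userManager)
  {
    _userManager = userManager;
  }

  [HttpPost]
  public async Task<IActionResult> Register(string username, string password)
  {
    var user = new MyCustomUser { UserId = Guid.NewGuid(), Username = username };

    // UserManager hashes the password with our CustomPasswordHasher and
    // persists the user via CustomUserStore
    var result = await _userManager.CreateAsync(user, password);
    return result.Succeeded ? Ok() : BadRequest(result.Errors);
  }
}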

 

That agile feeling

Do you remember being a junior developer? Or maybe, before you were a developer, being “that one who knows computers and Excel and stuff” in the office?

Remember how when someone said “it would be great if I didn’t have to do [some manual job that could be easily scripted]”, and you came back the next day with a hacked-together Excel macro or Access DB or shell script that did it for them?

Remember how pleased they were?

And, do you remember how they then said, “but it would be great if it could do [some other thing]” and you came back to them an hour later and showed them a magic button that did just that?

Remember how good that made you feel?

That right there is a short feedback cycle.

That’s what we’re trying to achieve every day with agile.

Using Dapper Output Params for Optimistic Concurrency

Ever had a problem with two users editing the same record? Maybe one of them overwrote the other’s changes.

The answer to this is optimistic concurrency, which is a fancy term for the practice where each entity checks, before saving, that no-one else has updated the record since it was originally loaded.

As an aside, “pessimistic concurrency” is so-called because under this model, the records are locked when someone opens them for editing, and unlocked once the record is saved or the changes discarded. Optimistic concurrency only checks for changes at the point of saving.

In practical terms, this involves adding a column into your SQL database table; this column is updated each time the row is updated. You can do this manually, but SQL Server gives it to you for free using the rowversion data type.

CREATE TABLE Employee (
  Id int identity not null primary key,
  Name nvarchar(200) not null,
  StartDate datetime not null,
  EndDate datetime null,
  ConcurrencyToken rowversion not null
)

The ConcurrencyToken field is technically a byte array, but can be cast to a bigint if you want a readable representation. It’s simply a value that increments every time that the individual row changes.

Let’s say we create a .NET object for this. It looks like this:

public class Employee {
  public int Id { get; set; }
  public string Name { get; set; }
  public DateTime StartDate { get; set; }
  public DateTime? EndDate { get; set; }
  public byte[] ConcurrencyToken { get; set; }
}

Using Dapper, we can write the following simple data access layer:

public class EmployeeDAL
{
  private DbConnection _connection; // assume we've already got this

  public async Task<Employee> GetAsync(int id)
  {
    const string Sql = "SELECT * FROM Employee WHERE Id = @id";
    return await _connection.QuerySingleOrDefaultAsync<Employee>(Sql, new { id });
  }

  public async Task<Employee> InsertAsync(Employee employee)
  {
    const string Sql = @"
INSERT INTO Employee ( Name, StartDate, EndDate )
VALUES ( @Name, @StartDate, @EndDate )";

    await _connection.ExecuteAsync(Sql, employee);
    return employee;
  }

  public async Task<Employee> UpdateAsync(Employee employee)
  {
    const string Sql = @"
UPDATE Employee SET
  Name = @Name,
  StartDate = @StartDate,
  EndDate = @EndDate
WHERE Id = @Id";

    await _connection.ExecuteAsync(Sql, employee);
    return employee;
  }
}

That’s great, but if you ran this, you’d notice that, on inserting or updating the Employee, the Id and ConcurrencyToken fields don’t change. To do that, you’d have to load a new version of the object. Also, the concurrency field isn’t actually doing anything – what’s that about?

Let’s make some changes. In our UpdateAsync method, let’s do:

public async Task<Employee> UpdateAsync(Employee employee)
{
  const string Sql = @"
UPDATE Employee SET
 Name = @Name,
 StartDate = @StartDate,
 EndDate = @EndDate
WHERE Id = @Id
AND ConcurrencyToken = @ConcurrencyToken";

  var rowCount = await _connection.ExecuteAsync(Sql, employee);
  if (rowCount == 0)
  {
    throw new Exception("Oh no, someone else edited this record!");
  }

  return employee;
}

This is crude, but we now can’t save the employee if the ConcurrencyToken field that we have doesn’t match the one in SQL. Dapper is taking care of mapping our object fields into parameters for us, and we can use a convention in our SQL that our parameter names will match the object fields.

However, we still won’t update the concurrency token on save, and when inserting we still don’t know what the ID of the new employee is. Enter output parameters!

public async Task<Employee> InsertAsync(Employee employee)
{
  const string Sql = @"
INSERT INTO Employee ( Name, StartDate, EndDate )
VALUES ( @Name, @StartDate, @EndDate )

SELECT @Id = Id, @ConcurrencyToken = ConcurrencyToken
FROM Employee WHERE Id = SCOPE_IDENTITY()";

  var @params = new DynamicParameters(employee)
    .Output(employee, e => e.ConcurrencyToken)
    .Output(employee, e => e.Id);

  await _connection.ExecuteAsync(Sql, @params);
  return employee;
}

Now after saving, the employee object will have updated Id and ConcurrencyToken values. We’ve used DynamicParameters instead of an object, which has allowed us to map explicit Output params to update the target object. How does it work? You give Dapper a target object, and a function to tell it which property to update. It then looks for a SQL output parameter or result that matches that property, and then uses that knowledge to update the property based on the results from SQL.
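
The same trick works for UpdateAsync. A sketch that combines the concurrency check with an output parameter, so the caller also gets the new token back after a successful save, might look like this:

public async Task<Employee> UpdateAsync(Employee employee)
{
  const string Sql = @"
UPDATE Employee SET
  Name = @Name,
  StartDate = @StartDate,
  EndDate = @EndDate
WHERE Id = @Id
AND ConcurrencyToken = @ConcurrencyToken

IF @@ROWCOUNT > 0
  SELECT @ConcurrencyToken = ConcurrencyToken
  FROM Employee WHERE Id = @Id";

  // ConcurrencyToken is both an input (the WHERE clause) and an output (the new value)
  var @params = new DynamicParameters(employee)
    .Output(employee, e => e.ConcurrencyToken);

  var rowCount = await _connection.ExecuteAsync(Sql, @params);
  if (rowCount == 0)
  {
    throw new Exception("Oh no, someone else edited this record!");
  }

  return employee;
}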

 

Thinking about microservices? Start with your database

Hang on, the database? Ahem, I think you’ll find that service contexts should be vertical, not horizontal, that is, based on function, not technology. So, nyah-nyah.

Let me explain.

This post isn’t for you if you happen to be Netflix. Nor is it for you if you’re writing a greenfield application with microservices in mind. No, this post is for the team maintaining a chunk of EnterpriseWare, something relatively dull yet useful and (presumably) profitable. This team has been toying with the idea of microservices, and may well be drawing up elaborate Visio diagrams to define the new architecture.

So, what does this have to do with databases?

A database is a service. In Windows land, it literally is a Windows service (sqlservr.exe, in point of fact), but in more general terms, it provides a service for managing an application’s data.

A database with a relatively simple schema, where the application layer controls most of the SQL code and most operations are simple CRUD, is more like a RESTful API. A database with a more complicated schema (parent-subcategories, multiple tables involved in atomic transactions) that provides complex stored procedures for the application(s) to use is more like a traditional RPC service. Either way, these things are services with a well-known API that the application uses.

Behold, the monolith!

Your basic monolithic codebase will hopefully include the source code for both the database and the application.

If every release requires your DBA to manually run ad-hoc SQL scripts, then you have far bigger problems to address than microservices. Version your migrations and set up continuous deployment before you start looking to change architecture!

Typically, a new version of the product will involve both database changes, and application changes, for example: adding a new field to a business entity, which may touch the table schema, stored procedure(s), the application tier and the UI layer. Without the schema changes, the app will not function. Without the application changes, the schema changes are, at best, useless, and at worst, broken.

Therefore, deployments involve taking the application down, migrating the database (usually after taking a backup), and then deploying the application code.

&nonbreakingchanges;

Microservices imply an application that composes multiple, independent services into a cohesive whole, with the emphasis being on “independent”. For a team with no real experience of this, a useful place to start is the database. Not only will it teach you valuable lessons about managing multiple parts of your application separately, but, even if you don’t decide to go down the microservice rabbit-hole, you will still have gained value from the exercise.

So, what exactly are we talking about? In practical terms:

  • Create a new repository for your database source code.
  • Create a separate continuous integration/deployment pipeline for your database.
  • Deploy your database on its own schedule, independent of the application.
  • Ensure that your database is always backwards-compatible with all live versions of your application.

Now, this last part is the hardest to do, and there’s no silver bullet or magic technique that will do it for you, but it is crucial that you get it right.

Whenever you make a schema change, whether that be a change to a table or stored procedure or whatever, then that change must not break any existing code. This may mean:

  • Using default values for new columns
  • Using triggers or stored procedures to fake legacy columns that are still in use
  • Maintaining shadow, legacy copies of data in extreme circumstances
  • Creating _V2, _V3 etc. stored procedures where the parameters or behaviour changes

The exact techniques used will depend on the change, and must be considered each time the schema changes. After a while, this becomes a habit, and ideally more and more business logic moves into the application layer (whilst storage and consistency logic remains the purview of the database).

Let’s take the example of adding a new column. In this new world, we simply add a database migration to add the new column, and either ensure that it’s nullable, or that it has a sensible default value. We then deploy this change. The application can then take advantage of the column.
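
For example, using a migration tool (FluentMigrator here purely for illustration – the post doesn’t prescribe one, and the table and column names are made up), a backwards-compatible change might look like:

using FluentMigrator;

// Hypothetical migration: adds a nullable column, so existing application
// versions keep working unchanged until they choose to use it
[Migration(201801150900)]
public class AddMiddleNameToCustomers : Migration
{
  public override void Up()
  {
    Alter.Table("Customers")
      .AddColumn("MiddleName").AsString(100).Nullable();
  }

  public override void Down()
  {
    Delete.Column("MiddleName").FromTable("Customers");
  }
}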

Let’s take stored procedures. If we’re adding new optional parameters, then we can just change the procedure, since this is a safe change. If, however, we are adding new required parameters, or removing existing ones, we would create MyProcedure_V2, and deploy it.

Let’s say we want to remove a column – how do we do this? The simple answer is that we don’t, until we’re sure that no code is relying on it. We instead mark it as obsolete wherever we can, and gradually trim off all references until we can safely delete it.

And this benefits us… how?

The biggest benefit to this approach, besides training for microservices, is that you should now be able to run multiple versions of your application concurrently, which in turn means you can start deploying more often to preview customers and get live feedback rather than waiting for a big-bang, make-or-break production deployment.

It also means that you’re doing less during your deployments, which in turn makes them safer. In any case, application-tier deployments tend to be easier, and are definitely easier to roll back.

By deploying database changes separately, the riskier side of change management is at least isolated, and, if all goes well, invisible to end users. By their very nature, safe, non-breaking changes are also usually faster to run, and may even require no downtime.

Apart from all that, though, your team are now performing the sort of change management that will be required for microservices, without breaking up the monolith, and without a huge amount of up-front investment. You’re learning how to manage versions, how to mark APIs as obsolete, how to keep services running to maintain uptime, and how to run a system that isn’t just a single “thing”.

Conclusion

If your team has up until now been writing and maintaining a monolith, you aren’t going to get to serverless/microservice/containerised nirvana overnight. It may not even be worth it.

Rather than investing in a large-scale new architecture, consider testing the water first. If your team can manage to split off your database layer from your main application, and manage its lifecycle according to a separate deployment strategy, handling all versioning and schema issues as they arise, then you may be in a good position to do that for more of your components.

On the other hand, maybe this proves too much of a challenge. Maybe you don’t have enough people to make it work, or your requirements and schema change too often (very, very common for internal line-of-business applications). There’s nothing wrong with that, and it doesn’t imply any sort of failure. It does mean that you will probably struggle to realise any benefit from services, micro or otherwise, and my advice would be to keep refactoring the monolith to be the best it can be.

ASP.NET Core automatic type registration

A little bit of syntactic sugar for you this Friday!

Let’s say we have an application that uses a command pattern to keep the controllers slim. Maybe we have a base command class that looks a bit like:

public abstract class CommandBase<TModel> where TModel : class
{
  protected CommandBase(MyDbContext db)
  {
    Db = db;
  }

  protected MyDbContext Db { get; }

  public abstract Task<CommandResult> ExecuteAsync(TModel model);
}
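
A concrete command then just supplies the logic. The MyCommand used in the next snippet might look roughly like this (Widget is a made-up entity, and CommandResult is assumed to have a simple constructor):

public class MyCommandModel
{
  public string Name { get; set; }
}

public class MyCommand : CommandBase<MyCommandModel>
{
  public MyCommand(MyDbContext db) : base(db)
  {
  }

  public override async Task<CommandResult> ExecuteAsync(MyCommandModel model)
  {
    // Hypothetical business logic: persist something via the shared DbContext
    Db.Add(new Widget { Name = model.Name });
    await Db.SaveChangesAsync();
    return new CommandResult(); // assumes a simple result type
  }
}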

Using a pattern like this means that we can have very slim controller actions where the logic is moved into business objects:

public async Task<IActionResult> Post(
  [FromServices] MyCommand command,
  [FromBody] MyCommandModel model)
{
  if (!ModelState.IsValid)
    return BadRequest(ModelState);
  var result = await command.ExecuteAsync(model);
  return HandleResultSomehow(result);
}

We could slim this down further using a validation filter, but this is good enough for now. Note that we’re injecting our command through the action parameters (and binding the model from the request body), which makes our actions very easy to test if we want to.

The problem here is that, unless we register all of our command classes with DI, this won’t work, and you’ll see an `Unable to resolve service for type` error. Registering the types is easy, but it’s also easy to forget to do, and leads to a bloated startup class. Instead, we can ensure that any commands which are named appropriately are automatically added to our DI pipeline by writing an extension method:

public static void AddAllCommands(this IServiceCollection services)
{
  const string NamespacePrefix = "Example.App.Commands";
  const string NameSuffix = "Command";

  var commandTypes = typeof(Startup)
    .Assembly
    .GetTypes()
    .Where(t =>
      t.IsClass &&
      t.Namespace?.StartsWith(NamespacePrefix, StringComparison.OrdinalIgnoreCase) == true &&
      t.Name?.EndsWith(NameSuffix, StringComparison.OrdinalIgnoreCase) == true);

  foreach (var type in commandTypes)
  {
    services.AddTransient(type);
  }
}

Using this, we can use naming conventions to ensure that all of our command classes are automatically registered and made available to our controllers.
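
Hooking it up is then a one-liner in ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();

  // Registers every class under Example.App.Commands whose name ends in "Command"
  services.AddAllCommands();
}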

Logging traces in ASP.NET Core 2.0 with Application Insights

Application Insights (AI) is a great way to gather diagnostic information from your ASP.NET app, and with Core 2.0, itʼs even easier to get started. One thing that the tutorials don’t seem to cover is how to see your trace logs in AI. By “tracing”, I mean things like:

_logger.LogInformation("Loaded entity {id} from DB", id);

This can be an absolute life-saver during production outages. In simpler times, we might have used DebugView and a TraceListener to view this information (in our older, on-prem apps, we used log4net to persist some of this information to disk for more permanent storage). In an Azure world, this isn’t available to us, but AI in theory gives us a one-stop-shop for all our telemetry.

Incidentally, itʼs worth putting some care into designing your tracing strategy — try to establish conventions for message formats, what log levels to use when, and what additional data to record.

You can see the tracing data in the AI query view as one of the available tables:

[Screenshot: the AI “traces” table]

For our sample application, we’ve created a new website by using:

dotnet new webapi

We’ve then added the following code into ValuesController:

public class ValuesController : Controller
{
  private ILogger _logger;

  public ValuesController(ILogger<ValuesController> logger)
  {
    _logger = logger;
  }

  [HttpGet]
  public IEnumerable<string> Get()
  {
    _logger.LogDebug("Loading values from {Scheme}://{Host}{Path}", Request.Scheme, Request.Host, Request.Path);
    return new string[] { "value1", "value2" };
  }
}

We have injected an instance of ILogger, and we’re writing some very basic debugging information to it (this is a contrived example as the framework already provides this level of logging).

Additionally, we’ve followed the steps to set our application up with Application Insights. This adds the Microsoft.ApplicationInsights.AspNetCore package, and the instrumentation key into our appsettings.json file. Now we can boot the application up, and make some requests.

If we look at AI, we can see data in requests, but nothing in traces (it may take a few minutes for data to show up after making the requests, so be patient). What’s going on?

By default, AI is capturing your application telemetry, but not tracing. The good news is that itʼs trivial to add support for traces, by making a slight change to Startup.cs:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
  loggerFactory.AddApplicationInsights(app.ApplicationServices, LogLevel.Debug);
}

We have just added AI into our logging pipeline; the code above will capture all messages from ILogger, and, if they are at logging level Debug or above, will persist them to AI. Make some more requests, and then look at the traces table:

[Screenshot: trace messages appearing in the traces table]

One further thing that we can refine is exactly what we add into here. You’ll notice in the screenshot above that everything is going in: our messages, and the ones from ASP.NET. We can fine-tune this if we want:

loggerFactory.AddApplicationInsights(app.ApplicationServices, (category, level) =>
{
   return category.StartsWith("MyNamespace.") && level > LogLevel.Trace;
});

You’re free to do any filtering you like here, but the example above will only log trace messages where the logging category starts with one of your namespaces, and where the log level is greater than Trace. You could get creative here, and write a rule that logs any warnings and above from System.*, but everything from your own application.
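
That last suggestion might look something like this (a sketch, with MyNamespace standing in for your own root namespace):

loggerFactory.AddApplicationInsights(app.ApplicationServices, (category, level) =>
{
  // Everything from our own code, but only warnings and above from the framework
  if (category.StartsWith("MyNamespace.", StringComparison.OrdinalIgnoreCase))
  {
    return true;
  }

  return level >= LogLevel.Warning;
});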

Traces are one of the biggest life-savers you can have during a production issue — make sure that you donʼt lose them!

VSTS agile work item types explained

At Sun Branding Solutions, we use the Agile template for our work in Visual Studio Team Services. We’ve been using this for a while, but people still occasionally ask what the difference is between epics, features, stories and tasks, so here are my thoughts on when and why to use the available item types.

Diagram of work item levels

Epics

If work items were books, then epics are The Lord of the Rings. An epic should represent a strategic initiative for the organisation – for example, “let’s go mobile”, or adding a brand new area of functionality. They may also be technical in nature, such as moving to the cloud, or a major re-skinning, but they should always be aligned to business goals.

An epic probably won’t have much description, and certainly shouldn’t include detailed specifications (that’s up to the Product Owner and their team of Business Analysts to produce).

Epics will span several sprints. Try not to add too many to the backlog, as that cheapens their impact. You probably wouldn’t have more than a handful within a year.

Epics are split into multiple…

Features

Going with our book metaphor, these are an individual Harry Potter novel – something part of a greater whole, but still a self-contained unit of value. The feature is defined by the Product Owner, with the help of BAs, key customers and end users. An example might be “upload files from OneDrive” – or, to put it another way, a feature is a bullet point on your marketing literature, whilst an epic is a heading.

If the product roadmap is largely defined by your customers, then features are the things that your customers really care about. If your roadmap is internal, then features are what your managers care about.

A feature may span multiple sprints, although ideally not more than three.

Features are broken down into…

User Stories

“As an X, I want to Y, so that I can Z.”

The BA will typically break a feature down into the individual requirements, which become user stories. A user story should be a distinct chunk of work that can be delivered and provide some value to the product.

Do not be tempted to split stories into “database architecture”, “data access layer”, “UI” – stories should reflect user requirements. Use tasks for development work items.

Stories have acceptance criteria that represent the requirements the developers must meet. These are absolutely crucial, because they are how the output will be tested and judged.

At this level, we also have…

Bugs

Probably the easiest to define, a bug is a defect that has been noticed and triaged. Bugs have reproduction steps and acceptance criteria. Typically, your QA team will own bugs, but your Product Owner, BA or Product Consultant may also be interested in specific bugs that affect them or their customers.

Beware of people using bugs to slip in change requests – these should always be handled as User Stories instead!

Both User Stories and Bugs break down into…

Tasks

A task is the only type of work item that counts towards the burn-down. Tasks have original and current estimates, the amount of time put in, and the amount of time remaining (it’s the remaining time that counts to the burn-down).

Developers own tasks. It is down to the architects and developers to analyse the User Stories (ideally, they should have been involved in drafting the User Stories and even Features to ensure they have a good general understanding), and set up as many tasks as they think are required.

The level of detail in tasks will vary depending on the team – a small, closely-knit team can get away with titles only, whilst a larger team may need more detail.

Tasks have an activity type, which defines what kind of resource is required (development, design, testing, documentation and so on).

When setting the capacity of your team, you can select the type of activity that your team members have, and this will help you see how much work of each type you can accommodate within a sprint.

Using the TryGet pattern in C# to clean up your code

Modern C# allows you to declare output variables inline; this has a subtle benefit of making TryFoo methods more attractive and cleaning up your code. Consider this:

public class FooCollection : ICollection<Foo>
{
  // ICollection<Foo> members omitted for brevity
  public Foo GetFoo(string fooIdentity)
  {
    return this.FirstOrDefault(foo => foo.Identity == fooIdentity);
  }
}

// somewhere else in the code
var foo = foos.GetFoo("dave");
if (foo != null)
{
  foo.Fight();
}

Our GetFoo method will return the default value of Foo (null) if nothing matches fooIdentity; code that uses this API needs to know that null means no matching item was found. That isn’t unreasonable, but it does mean that we’re using two lines to find and assign our matching object. Instead, let’s try this approach:

public class FooCollection : ICollection<Foo>
{
  public bool TryGetFoo(string fooIdentity, out Foo fighter)
  {
    fighter = this.FirstOrDefault(foo => foo.Identity == fooIdentity);
    return fighter != null;
  }
}

We’ve encoded the knowledge that null means “not found” directly into our method, and there’s no longer any ambiguity about whether we found our fighter or not. Our calling code can now be reduced by a line:

if (foos.TryGetFoo("dave", out var foo))
{
  foo.Fight();
}

It’s not a huge saving on its own, but if you have a class of several hundred lines that’s making heavy use of the GetFoo method, this can save you a significant number of lines, and that’s always a good thing.

Lose your self-respect with async

How many times have you written code like this in JavaScript/TypeScript?

function saveData() {
  const self = this;
  $.post("/api/foo", function (response) {
    self.updated(true);
  });
}

The self variable is used to keep a reference to the real calling object so that, when the callback is executed, it can actually call back to the parent (if you used this, then its value would change as the callback executes).

Enter async! Suddenly, you can write:

async function saveData() {
  await $.post("/api/foo");
  this.updated(true);
}

That’s an awful lot of lines that you’ve just saved across your codebase!

Ditching typings for NPM @types

Managing TypeScript type files (*.d.ts) for third-party libraries has been a pain for a while; in the distant past of last year, I used the NPM package tsd, which has since been superseded by typings. Neither of these felt particularly nice to use, and I’ve been passively searching for alternatives.

The other day, I found out that NPM has typings support built-in! For example, to install JQuery types, one simply runs:

npm install --save-dev @types/jquery

Boom! If we look under node_modules, we see:

  • node_modules
    • @types
      • jquery
        • index.d.ts

It’s a wonderful thing. But wait! Now TypeScript, or rather Visual Studio, doesn’t know where to find the typings! Horror, whatever shall we do? Not to worry, turns out that TypeScript has us covered. In tsconfig.json, we simply add:

{
  ...
  "compilerOptions": {
    ...
    "typeRoots": [ "node_modules/@types" ]
  }
  ...
}

(You may need to close/open the project a few times before VS gets the message!)

So it’s goodbye typings – one less tool to worry about on the chain.