On the Art of Simplicity

It must use a distributed SOA architecture!

Really? Are you sure that a simple database-models-website solution wouldn’t work just as well for your 100 users, not to mention improving developer productivity by allowing them to run/debug it in one go?

Pro tip: if you factor your application well, such that all large components (e.g. your file storage layer, messaging sub-system, order processor, customer credit checker, etc) are represented by interfaces and passed as parameters, then splitting a ‘monolithic’ application into SOA components becomes quite a simple exercise.
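To make that concrete, here’s a minimal sketch (the interface and class names are invented for illustration): the order processor depends only on abstractions handed to it, so whether the credit checker runs in-process or behind a web service is invisible to it.

// Hypothetical component interfaces, as described above.
public interface IFileStore
{
	void Save(string path, byte[] content);
}

public interface ICreditChecker
{
	bool HasSufficientCredit(int customerId, decimal amount);
}

// The order processor only sees the interfaces, so swapping a local
// implementation for a service proxy later doesn't change this class at all.
public class OrderProcessor
{
	private readonly IFileStore _fileStore;
	private readonly ICreditChecker _creditChecker;

	public OrderProcessor(IFileStore fileStore, ICreditChecker creditChecker)
	{
		_fileStore = fileStore;
		_creditChecker = creditChecker;
	}

	public bool Process(int customerId, decimal amount, byte[] invoice)
	{
		if (!_creditChecker.HasSufficientCredit(customerId, amount))
			return false;

		_fileStore.Save("invoices/" + customerId + ".pdf", invoice);
		return true;
	}
}

Splitting this out into SOA later then amounts to writing, say, a CreditCheckerServiceClient that implements ICreditChecker over HTTP and passing that in instead of the local implementation.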

We should think up-front about sharding the database!

Maybe. If the database is well-designed, sharding shouldn’t present a problem when it becomes necessary, but you might find that judicious use of de-normalisation, data warehousing and caching can take you a long way before you need to consider fragmenting the physical schema (millions of rows is not big data!).
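As one (entirely hypothetical) illustration of the caching point: a read-through cache in front of a hot query can shave a huge amount of load off the database without touching the schema at all.

using System;
using System.Runtime.Caching;

// Illustrative only: a read-through cache in front of a hot customer lookup.
public class Customer
{
	public int Id { get; set; }
	public string Name { get; set; }
}

public class CachedCustomerReader
{
	private readonly MemoryCache _cache = MemoryCache.Default;
	private readonly Func<int, Customer> _loadFromDatabase; // stands in for the real data access call

	public CachedCustomerReader(Func<int, Customer> loadFromDatabase)
	{
		_loadFromDatabase = loadFromDatabase;
	}

	public Customer GetCustomer(int id)
	{
		var key = "customer:" + id;
		var cached = _cache.Get(key) as Customer;
		if (cached != null)
			return cached;

		var customer = _loadFromDatabase(id);
		_cache.Set(key, customer, DateTimeOffset.UtcNow.AddMinutes(5));
		return customer;
	}
}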

We need to make the business rules configurable!

Maybe. See The Daily WTF for some thoughts on why configurability isn’t always a good idea.

I’ve seen so many instances where intrinsic rules were made configurable – I once saw a file storage library where users could ‘configure’ which DLL handled thumbnail generation for each file extension. Of course, only developers could write and deploy those handler modules, and the configuration was hard to script, so it became a manual step that had to be performed after every deployment of a new thumbnail handler.

Hard-coding a price list is probably a bad idea. Soft-coding a set of legal rules, on the other hand, can lead you into a world of hurt if the shape of those rules changes in the future. Where possible, factor the rules into a re-usable library; if you later need to make them configurable, there is then only one object to modify.
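As a sketch of that last point (the shipping rule and its numbers are made up for illustration): the rule lives in exactly one class in a shared library, and if it ever genuinely needs to become configurable, that class is the only place that grows a configuration seam.

// Hypothetical business rule, kept in one re-usable library class rather
// than soft-coded in configuration that only developers can change anyway.
public static class ShippingRules
{
	// Deliberately hard-coded; if this ever needs to vary, there is
	// exactly one place to change.
	private const decimal FreeShippingThreshold = 50m;
	private const decimal StandardShippingCost = 4.99m;

	public static decimal ShippingCost(decimal orderTotal)
	{
		return orderTotal >= FreeShippingThreshold ? 0m : StandardShippingCost;
	}
}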

What about an Enterprise Service Bus?

No. Just no.

(You might need message queues for background processing, and no harm done. You might even decide on a common queuing engine – be it a library, a service or just a pattern – but as soon as you start referring to it as the Enterprise Service Bus, it takes on a gruesome life of its own and summons Cthulhu.)
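If you do adopt a common queuing pattern, it can stay very small indeed; the interface and in-memory implementation below are just an illustrative sketch, not any particular library.

using System.Collections.Concurrent;

// A deliberately tiny queuing abstraction: a pattern, not a Bus.
public interface IMessageQueue<T>
{
	void Enqueue(T message);
	bool TryDequeue(out T message);
}

// In-memory implementation for background work inside one process; swap in
// one backed by MSMQ/RabbitMQ/Azure Service Bus if and when the need arrives.
public class InMemoryMessageQueue<T> : IMessageQueue<T>
{
	private readonly ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();

	public void Enqueue(T message)
	{
		_queue.Enqueue(message);
	}

	public bool TryDequeue(out T message)
	{
		return _queue.TryDequeue(out message);
	}
}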

So I was thinking that we should have a common repository pattern where we serialise business objects into DTOs—

What’s wrong with ADO.NET and SQL queries? Or you could get fancy and set up an Entity Framework data context, or the ORM of your choice. Unless you absolutely need some sort of API-based interface, why worry about serialisation at all (and when you do need it, JSON.Net does a brilliant job)?

By all means have a pattern for DALs/repositories, but for smaller projects/simpler databases you might be able to just have a single DAL representing your database.
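For the single-DAL case, a plain ADO.NET class along these lines is often all a small project needs (the table and column names are invented for illustration):

using System.Collections.Generic;
using System.Data.SqlClient;

// A single, plain DAL class for a small database - no repository hierarchy,
// no DTO mapping layer, just queries against hypothetical tables.
public class ShopDal
{
	private readonly string _connectionString;

	public ShopDal(string connectionString)
	{
		_connectionString = connectionString;
	}

	public IList<string> GetCustomerNames()
	{
		var names = new List<string>();

		using (var connection = new SqlConnection(_connectionString))
		using (var command = new SqlCommand("SELECT Name FROM Customers", connection))
		{
			connection.Open();
			using (var reader = command.ExecuteReader())
			{
				while (reader.Read())
				{
					names.Add(reader.GetString(0));
				}
			}
		}

		return names;
	}
}

If that class starts sprouting dozens of methods across unrelated tables, that’s the point at which splitting it into smaller, focused DALs or repositories starts to pay for itself.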

Let’s build a SPA UI framework!

Let’s not. If you need a SPA, use Angular, Ember, etc., but maybe you don’t need one yet – if your data set is small and your customer base isn’t massive, you can probably get away with vanilla ASP.NET MVC/Rails/Node.js/etc. Add Knockout.js, or even just some simple jQuery/vanilla JavaScript, on top for the bits of rich UI. If that starts looking like spaghetti, then it’s time to think about moving to a SPA and putting a framework in place.

Hell, I had to re-write a simple data entry system (PHP, 10 users, about a dozen screens, internal only) a couple of years back – I wrote it in ASP.NET WebForms (boom!) in a weekend using GridViews, ObjectDataSources and Membership controls.

But changing code after the fact is hard and messy – this is how systems become Big Balls of Mud!

Ye-es, to an extent. Certainly this happens if you do a half-assed job of it and try to change things piecemeal.

If your system is well-factored with clear separations between layers and components, then it should be simple to upgrade/refactor/change components. If you’re going to move from ASP.NET MVC to Ember.js, then make sure you do it wholesale – change the entire front-end UI and don’t call ‘done’ until it’s all done. If you’re going to move that order processing object into a web service, move the whole thing and delete the old implementation lest others try to use it.

The problem is not change – the problem is half-a-job changes, where people think they’re playing it safe by leaving the old stuff there ‘just in case’; in reality, it’s much safer to break the system now than to leave time-bombs in it for the future.

Unit testing all subclasses

This post will demonstrate a simple technique using Visual Studio and NUnit for writing a single unit test that tests all current and future implementations of a base class.

Background

My current project has an abstract class representing an email message; the abstract base contains the basics like ‘to’, ‘CC’, ‘subject’ and so on, and each type of email that the app sends is represented by a concrete class inheriting from the base. In a separate class library, we have a template engine which takes an email object, loads an appropriate template, and uses the awesome RazorEngine to build the email HTML for sending.

For example, we have PasswordResetEmail, which sends users a new password if they forgot theirs; this has UserName and NewPassword properties to be injected into the template.

Email inheritance diagram

For simplicity, the templates are stored as embedded resources in the assembly, so this entire process is eminently unit testable.
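For reference, the hierarchy looks roughly like this (the property names are taken from the description above; the real classes may well have more to them):

// Abstract base with the common email fields.
public abstract class EmailBase
{
	public string To { get; set; }
	public string Cc { get; set; }
	public string Subject { get; set; }
}

// One concrete class per email the app sends, e.g. the password reset.
public class PasswordResetEmail : EmailBase
{
	public string UserName { get; set; }
	public string NewPassword { get; set; }
}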

Testing all templates

I want to test that the template engine can successfully process each email object; there are a dozen or so of these, and we can envisage another dozen or so being added. Initially, we tested a couple of templates and then called it a day, but recently we found that we had Razor compilation errors in some of the untested ones.

We could have copy/pasted tests for each template, but:

  1. that violates the DRY principle
  2. it’s easy to forget to test new templates
  3. frankly, it just looks messy

Instead, here’s what we wrote:

public static EmailBase[] EmailObjects
{
	get
	{
		var objects = new List<EmailBase>();
		
		// get all inheriting types
		var types = typeof(EmailBase).Assembly
			.GetTypes()
			.Where(type => type.IsSubclassOf(typeof(EmailBase)));

		foreach (var type in types)
		{
			// get the default constructor
			var ctor = type.GetConstructor(new Type[] { });
			if (ctor != null)
			{
				objects.Add((EmailBase)ctor.Invoke(new object[] { }));
			}
		}

		return objects.ToArray();
	}
}

[Test]
[TestCaseSource("EmailObjects")]
public void RazorTemplateEngine_RenderHtmlTest(EmailBase model)
{
	ITemplateEngine target = new RazorTemplateEngine();
	string html = target.RenderHtml(model);
	Assert.That(HtmlIsValid(html)); // details excluded for brevity
}

Et voila! With a simple bit of reflection and NUnit’s TestCaseSource attribute, we can automatically pass in all classes which inherit from EmailBase and test that our template engine can render their HTML successfully! If we think of new tests to add, we can use the same technique to apply them to all EmailBase objects in the codebase.
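For example, a second test reusing the same source might look like this (the non-empty-subject rule is just an illustrative assumption about our emails):

[Test]
[TestCaseSource("EmailObjects")]
public void EveryEmail_HasASubject(EmailBase model)
{
	// Hypothetical extra rule, automatically applied to every current and
	// future email type via the same TestCaseSource.
	Assert.That(string.IsNullOrWhiteSpace(model.Subject), Is.False);
}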

Final thoughts

Unit testing (or TDD if you like) is hard. We’ve noticed a tendency to disregard the normal rules of DRY, SOLID etc. when writing tests because they’re ‘just test code’ – in reality, maintaining tests is no less important than maintaining the production codebase (arguably more so). Always consider whether there’s a more elegant way of writing test code!

This post tests the theme. Nothing to see here.

Literally: I just want to see how the themes look with some actual content. I don’t have the energy to post anything interesting, as it’s work tomorrow and I’m still under the lingering influence of a stinking cold.

On the plus side, I’ve spent the weekend successfully laying a new border in the garden, whilst my laptop worked to migrate the remaining TFS-VC repositories to VSOnline + Git – which went surprisingly smoothly, and I’ll write up the details in a later post.

I have a few posts echoing around inside my head, which I’ll start posting over the next few weeks. In the meantime, here is a pretty picture!

White clematis