Fun with T-SQL Computed Columns

First, please take a minute to check out the first video from my old friend Chris Saxon:

This reminded me of a similar trick I’ve used in a couple of SQL Server databases.

When storing details of people, you often find that the first and last names have their own columns; this makes sorting easier, but if you’re trying to retrieve the whole name as a single column, then it’s a pain to always remember to write FirstName + ' ' + LastName. Initially, I made my life easier by adding this:

CREATE TABLE Person (
  Id INT IDENTITY NOT NULL PRIMARY KEY,
  FirstName NVARCHAR(100) NOT NULL,
  LastName NVARCHAR(100) NOT NULL,
  FullName AS FirstName + ' ' + LastName PERSISTED
)

Better – we can now query on our computed FullName column and, because we’ve marked it PERSISTED, the value is stored physically and recalculated only when the underlying columns change, so there’s little to no overhead in doing so. However, in a moment of whimsy, I took it a step further:

ALTER TABLE Person ADD NameFormat TINYINT NOT NULL DEFAULT (0)

-- SQL Server can't ALTER a computed column in place, so drop it and re-create it with the new definition
ALTER TABLE Person DROP COLUMN FullName

ALTER TABLE Person ADD FullName AS
  CASE (NameFormat)
    WHEN 1 THEN (LastName + N', ' + FirstName)
    WHEN 2 THEN (SUBSTRING(FirstName, 1, 1) + N'. ' + LastName)
    WHEN 3 THEN (LastName + N', ' + SUBSTRING(FirstName, 1, 1) + N'.')
    ELSE (FirstName + N' ' + LastName)
  END PERSISTED

Depending on the value of the NameFormat column, my name renders as “Keith Williams”, “Williams, Keith”, “K. Williams”, or “Williams, K.”.

If name formatting were a big deal, then I’d recommend doing it in the application layer rather than in SQL (especially if you’re handling non-European names), but the approach above gives you a convenient way of storing basic formatted names without repeating code or calling scalar functions.

Your team should make mistakes – and take the consequences

Much and more has been written about how technical managers should not impose control on their teams, but instead act as facilitator and coach to the team, whilst the team makes its own decisions, some of which will, inevitably, be wrong.

Having worked at organisations where “autonomy” meant “any way you like, so long as it’s the way I would have done it” – an ethos which came down from the highest echelons and effectively rendered all senior and middle managers irrelevant, since all they could do was relay orders – I can wholeheartedly endorse this theory. The more knowledge, control and motivation you have at the lower levels of your people stack, the better the machine will work in aggregate. Put simply, if you made the decision, you are more likely to care about it; the more you care about a decision, the more work you will put in, and the better the quality of your decision making.

However, the one thing that is often left out of these writings is the Dark Side of this equation:

Freedom to make decisions is beneficial so long as those making the decisions are exposed to the consequences

Take a lumbering corporate command-and-control behemoth, and try giving the team freedom to make decisions. Said team has been used to mushroom management for so long that they don’t even know who the clients are, let alone what the point of their product actually is. Give them decisions to make, and they’ll make decisions that make their own lives easier, because that’s the only set of requirements they understand. When the project crashes and burns, it’ll be the layer above them that takes the consequences. This may filter down to the team, but more likely their boss will be sacked, and the team will go back to being ignored.

Now, take this team, and put them in actual control – make them talk to the clients, have them make the promises, put them on the phones when things go wrong. A developer who implements an ill-thought-out spec from a bit of paper can understandably make a half-arsed job of it, but if said developer is on speaking terms with the client who requested said feature, and knows that the feature will make this client’s life that much easier, then empathy kicks in, and the developer starts to care about their work.

Managers should shield technical teams from the minutiae that would otherwise just get in their way, but if the team is granted the freedom to decide its own destiny, then the team should also know and understand that the freedom to succeed is also the freedom to fail.

How they cope with that failure is perhaps the most important decision that they can make.

Why Red-to-Green projects just add more Red

A Red-to-Green project is one that aims to take an old, shoddy system (“red”) and, bit by bit, turn it “green” – in practice, by replacing each old component in turn by a newer piece of code.

The logic behind this is that the system stays operational whilst developers add newer modules and services, with appropriate shims and interfaces so that the remaining legacy code thinks it’s still talking to its old buddies.

An example: you have an ancient web application which consists of a SQL database, a set of COM+ objects in VB6, and a Classic ASP front-end. Using RTG methodology, you could write a new version of each COM+ object using, say, C#, which exposed the same COM-visible methods as the old object. You could install the new object in COM+, and the website should, in theory, continue to work as normal – at the same time, you could code newer parts of the website in ASP.NET MVC and use the DLL directly (or maybe within a service). Alternatively, you could go ahead and rewrite some particularly grotty ASP pages in MVC, and create an interop library which talks to the COM+ objects but exposes a more pleasant API for ASP.NET to use.
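To make that last option a little more concrete, here is a rough sketch of what such an interop wrapper might look like – the LegacyApp.OrderManager ProgID, the FetchOrderRecord method, the pipe-delimited return format and the Order type are all invented for illustration, not taken from any real system:

// Hypothetical interop wrapper: hides the COM+ calling conventions behind a friendlier API,
// so new ASP.NET code never needs to know about the legacy component.
public class Order
{
  public int Id { get; set; }
  public string CustomerName { get; set; }
  public decimal Total { get; set; }
}

public class LegacyOrderService
{
  public Order GetOrder(int orderId)
  {
    // The ProgID and method name below are made up for the sake of the example
    var comType = System.Type.GetTypeFromProgID("LegacyApp.OrderManager");
    dynamic legacy = System.Activator.CreateInstance(comType);

    // Suppose the old component returns a pipe-delimited string; translate it into an object
    string raw = legacy.FetchOrderRecord(orderId.ToString());
    var parts = raw.Split('|');

    return new Order
    {
      Id = orderId,
      CustomerName = parts[0],
      Total = decimal.Parse(parts[1])
    };
  }
}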

Neat. What’s the problem?

It’s too good to be true.

Red-to-Green, like many bad ideas, sounds fantastic to management:

What’s that Jenkins? You say you can rebuild our software whilst still keeping the application running and delivering new features? You say that this new ‘green’ stuff will make development faster? By Jove, when can you start?!

This is bullshit. This is what actually happens:

  • Developers begin coding new modules in <modern language of choice>. Usually, the choice of modules will revolve around business needs – i.e. Customer X wants feature Y, so can we use this Red-to-Green stuff to deliver Y faster?
  • A fair amount of time is taken up with the basic plumbing – creating base classes, defining an interop layer (.NET to COM+, Ruby to PHP, views over an old SQL schema etc.) – but things will really start to move once this is done!
  • The developers start implementing feature Y. Instantly, their new architecture comes into conflict with the old – no-one told them (or remembered) that this batch VBA script needs to run in the background to copy files, or that the format of filenames is actually really important for this module.
  • The new, green components now consist mostly of code which replicates, or compensates for, the eccentricities of the legacy code base – or else the new code is wholly reliant on calling methods in the old codebase, and is tightly coupled to it.
  • Feature Y is delivered – probably late, but that’s OK, because the RTG pattern is established now, right?
  • Feature Z, which involves modifying a small part of the legacy system, needs to get information from Feature Y. The developers write an interface so the legacy codebase can talk to the new one. The green components are now coupled to the red, and vice versa.
  • Management begins to lose interest – there are paying customers out there, and Red-to-Green is taking too long and causing too much confusion. Who cares that the email formatting engine is a tangled mess of VB6? It works, so leave it alone! We’ll allocate some more time in the new year.

At this point, the green components are now just a different flavour of legacy code: they go to join the swamp of rotting code, more meatballs in the Great Spaghetti Monster.

When to Inject a Dependency

Let’s take a simple scenario: say that you’re developing a holiday request system (because the previous MS Access-cum-Excel-cum-Word jalopy that the boss wrote ten years ago has finally corrupted beyond the point of no return). You’re given the basic parameters, and then, if you’re like me, you start pseudo-coding:

class HolidayManager
{
  void RequestHoliday(Employee emp, DateTime start, DateTime end)
  {
    // TODO: sensible error messages and basic validation stuff
    if (start < DateTime.Now) throw new ArgumentOutOfRangeException();
    if (end < start) throw new ArgumentOutOfRangeException();

    // does the user have any holiday available?
    if (database.GetHolidaysRemaining(emp) == 0)
      throw new OutOfHolidayException("TODO: should we use a return code?");

    // assume that this call returns us a unique ID for this holiday
    var holidayId = database.AddHoliday(emp, start, end, HolidayStatus.Requested);

    // tell the user
    emailer.SendConfirmation(emp, start, end);

    // tell their line manager
    var manager = database.GetLineManager(emp);
    if (manager != null) // they might be the Big Boss
      emailer.SendRequestForApprovalEmail(emp, start, end, holidayId);
    else
      this.ApproveHoliday(holidayId, HolidayStatus.Approved);
  }

  void ApproveHoliday(int id, HolidayStatus status)
  {
    // assume this struct has info about the holiday
    var holiday = database.GetHoliday(id);
    if (holiday.Status == HolidayStatus.Requested)
    {
        database.SetApproval(id, status);
        int daysLeft = database.CalculateRemainingDays(holiday.Employee);

        // send email to the requester (assume that holiday includes an email address)
        emailer.SendApprovalEmail(holiday, status, daysLeft);
    }
    else
      throw new InvalidOperationException("already approved");
  }
}

What we have here is a basic sketch of our business logic – we don’t care about how the database works, and we don’t care how emails go out, because that’s not important to us at this stage.

If, whilst pseudo-coding, you find yourself writing GetStuffFromDatabase or somesortofqueue.Push(msg), then congratulations – you’ve identified your dependencies!

Whenever you find yourself writing a quick note to cover up what you know will be a complicated process (retrieving data from a store, passing messages to a queue, interacting with a third party service, etc.), then this is an excellent candidate for a dependency.

Looking at our example, we’ve got two dependencies already:

  • database
  • emailer

Now, my point here is not to design the best holiday application imaginable – the code above is just a sketch. My point is that we’ve now identified some application boundaries, namely the boundary between business logic and services.

Let’s take this a little further: our emailer implementation might have dependencies of its own. Say that it needs to format an email (given data and a template) and then send it. We might switch the formatter at some point in the future (using RazorEngine, NVelocity or just string.Format), and we might change the delivery mechanism (because the business now has a central email queue, and we need to submit messages to that instead of using SMTP directly). In this case, the business logic of the email engine is:

class HolidayEmailer {
  IEmailFormatter formatter;
  IEmailSender sender;
  IEmailLogger logger;

  void SendEmail(someObject email)
  {
    var message = new System.Net.MailMessage();
    message.To.Add(email.To);
    message.Subject = email.Subject;
    message.Body = formatter.Format(email);

    sender.Send(message);
    logger.Log(message);
  }
}

What this means is that we have many levels of DI – from the point of view of our holiday app, it has an abstraction that it can call to send emails, and we can unit test our object against a mock. We can also test our emailer implementation, because we’ve abstracted the formatting, sending and logging, which means that we can test what happens if any of these fail, or produce weird results.
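As a rough sketch of what that wiring might look like – the interface names, the Moq and NUnit usage, and the test itself are my own illustrative assumptions, reusing the types from the pseudo-code above rather than showing a finished design:

// Hypothetical interfaces extracted from the sketch above
public interface IHolidayDatabase
{
  int GetHolidaysRemaining(Employee emp);
  int AddHoliday(Employee emp, DateTime start, DateTime end, HolidayStatus status);
  Employee GetLineManager(Employee emp);
}

public interface IHolidayEmailer
{
  void SendConfirmation(Employee emp, DateTime start, DateTime end);
  void SendRequestForApprovalEmail(Employee emp, DateTime start, DateTime end, int holidayId);
}

public class HolidayManager
{
  readonly IHolidayDatabase database;
  readonly IHolidayEmailer emailer;

  // Constructor injection: the business logic never creates its own dependencies
  public HolidayManager(IHolidayDatabase database, IHolidayEmailer emailer)
  {
    this.database = database;
    this.emailer = emailer;
  }

  // ... RequestHoliday and ApproveHoliday as sketched above ...
}

// In a unit test, we hand the manager fakes instead of a real database or SMTP server
[Test]
public void RequestHoliday_WithNoDaysRemaining_Throws()
{
  var database = new Mock<IHolidayDatabase>();
  database.Setup(d => d.GetHolidaysRemaining(It.IsAny<Employee>())).Returns(0);
  var emailer = new Mock<IHolidayEmailer>();

  var manager = new HolidayManager(database.Object, emailer.Object);

  Assert.Throws<OutOfHolidayException>(
    () => manager.RequestHoliday(new Employee(), DateTime.Today.AddDays(1), DateTime.Today.AddDays(2)));
}

The only real difference from the sketch is that the dependencies now arrive through the constructor, which is exactly what lets the test substitute them.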

Don’t outsource your business logic

Just another Monday morning, and Dwayne The Developer is ready to start cranking code. The business needs an object to make, track and approve holiday requests. Dwayne, being a conscientious sort, and having heard that Interfaces Are A Good Thing™, starts by writing:

interface IHolidays {
  // returns the holiday ID from the database
  int Request(Employee emp, DateTime start, DateTime finish);
  void Approve(Employee manager, int holidayId);
  void Reject(Employee manager, int holidayId, string reason);
  bool IsApproved(int holidayId);
}

Yay, Liskov and all that good stuff, right?

Wrong.

Sharon The Architect happens to be wandering by Dwayne’s desk, and sees what he’s writing. She asks why he’s made the business object an interface, and, not being satisfied with a few incoherent mumbles about SOLID, points out that Dwayne has just outsourced the business logic of this module, without gaining any of the benefits of dependency injection or inversion of control.

Interfaces are much like outsourcing: a software house might outsource non-core tasks such as designing the public website, internationalising the UI, designing icons or writing a custom SAP integration module. A sensible software house would seldom outsource its “secret sauce” – the core code that makes or breaks the app (I’ve seen some people do this, and it never ends well).

An interface essentially outsources a task: it says “here is a method that does X; I don’t care who does it or how, just do it and deliver me the result”.

The point is that there is no point in ever having a different implementation of IHolidays, because that object IS the application.

These things are good candidates for wrapping inside interfaces:

  • Database calls
  • SMTP and SMS APIs
  • Message queueing systems
  • File systems
  • Third-party APIs

Each of these is something with complicated internal logic that isn’t relevant to the business logic that your real objects are implementing – think of them as “stuff that does stuff and returns a result, where I don’t really know or care how that stuff happens”.

Because they aren’t part of the business logic, they’re excellent candidates for mocking during unit tests. We don’t need to test whether System.IO.File.Delete works; we just need to know what happens to our code when it does or doesn’t.
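For instance – a minimal sketch, in which the IFileStore wrapper, the CleanUpJob class and the Moq/NUnit usage are all assumed purely for illustration – wrapping the file system lets us test the failure path without ever touching a disk:

// A thin wrapper around the file system; the interface earns its keep because it sits on a boundary
public interface IFileStore
{
  void Delete(string path);
}

public class PhysicalFileStore : IFileStore
{
  public void Delete(string path)
  {
    System.IO.File.Delete(path);
  }
}

// Hypothetical consumer: what matters is how it behaves when the file system misbehaves
public class CleanUpJob
{
  readonly IFileStore files;

  public CleanUpJob(IFileStore files)
  {
    this.files = files;
  }

  public void Run()
  {
    try { files.Delete(@"C:\temp\old-report.csv"); }
    catch (System.IO.IOException) { /* log and carry on – our behaviour is what's under test */ }
  }
}

// We don't test whether File.Delete works; we test how our code reacts when it fails
[Test]
public void CleanUp_WhenDeleteFails_DoesNotThrow()
{
  var files = new Mock<IFileStore>();
  files.Setup(f => f.Delete(It.IsAny<string>())).Throws(new System.IO.IOException());

  var job = new CleanUpJob(files.Object);

  Assert.DoesNotThrow(() => job.Run());
}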

So what should our holiday object look like? Tune in next time to find out…

Why we’re all doing objects wrong

The object’s interface consists of a set of commands, each command performing a specific action. An object asks another object to perform an action by sending it a message.

From http://www.inf.ufsc.br/poo/smalltalk/ibm/tutorial/oop.html (emphasis mine)

It struck me recently that the vast majority of ‘objects’ that I’ve coded haven’t actually been objects, in the purest sense of the word. Take this as an example:

public class Survey
{
  public int Id { get; set; }
  public DateTime SurveyDate { get; set; }
  public ICollection<Surveyor> Surveyors { get; private set; }
  // etc
}

In a typical Enterprise Application, we code dozens – hundreds – of “classes” like these. For the most part, they replicate what’s in the database, and carry nary a whiff of that pagan idol, “Business Logic”. What logic they do have is generally in the order of:

public string FirstName { get; set; }
public string LastName { get; set; }
public string FullName {
  get { return string.Concat(this.FirstName, " ", this.LastName); }
}

It’s not bad code, it’s not useless, but it’s not really an object – it’s a record.

The typical business application starts life as a database, or at least deals primarily with the input and output of data from a store. The data itself does very little other than trickle from the database into some sort of application code and into a UI. What logic it does have is usually UI logic (see the above example) – syntactic sugar to avoid repetition in the UI layer.

Occasionally, an object will encode validation rules, but these tend to be reflections of the database rules, and very rarely do they go beyond “field is required”.

The objects that we’re coding are the messages that the definition above speaks of – we code them, we build data access layers to load and save them, and we build UIs to display them.

So where does the real business logic lie – where are the real objects? In reality, I suspect that the vast majority is pushed into the database, into tangled stored procedures, patched and hacked and maybe, if you’re lucky, kept in source control. More logic tends to live in MVC controllers, the code-behind of WebForms pages, embedded deep in classic ASP or PHP pages, VB or WinForms, and on and on. Be honest: you’ve seen or written code like this before:

public ActionResult Create(Survey model)
{
  if (repository.Surveys.Any(s => s.Code == model.Code))
    ModelState.AddModelError("Code", "Code already exists");

  if (!ModelState.IsValid)
    return View(model);

  var property = repository.GetProperty(model.PropertyId);
  property.DateOfLastSurvey = model.SurveyDate;
  property.Surveys.Add(model);

  repository.SaveChanges();

  emailer.SendEmail(property.ManagerEmail, "New survey entered", "blah blah blah");
  
  auditLog.RecordCreated(model);
  auditLog.RecordUpdated(property);

  return RedirectToAction("Created", model);
}

OK, so it’s MVC, but it could just as easily be a CreateSurveyButton_Click event handler, or create_survey.php, or whatever – it’s not bad code, and it’s not even particularly verbose, but it’s making up the business logic as it goes, assuming that the developer knows what the logic is and that it’s so obvious it doesn’t need to be encapsulated in an object.

(As an aside, this is why I have a problem with people who have a problem with “xxxManager” objects – you know what, SurveyManager is a perfectly good name for an object if you can’t think of a better one!)
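To make the alternative concrete, here is a rough sketch – the SurveyManager name and the DuplicateSurveyCodeException are illustrative, and it simply reuses the repository, emailer and auditLog collaborators from the controller above rather than prescribing a design – of the same logic pulled out into one object:

public class SurveyManager
{
  readonly IRepository repository;
  readonly IEmailer emailer;
  readonly IAuditLog auditLog;

  public SurveyManager(IRepository repository, IEmailer emailer, IAuditLog auditLog)
  {
    this.repository = repository;
    this.emailer = emailer;
    this.auditLog = auditLog;
  }

  public void Create(Survey survey)
  {
    // The business rule now lives in one place, not in whichever UI happens to call it
    if (repository.Surveys.Any(s => s.Code == survey.Code))
      throw new DuplicateSurveyCodeException(survey.Code); // hypothetical exception type

    var property = repository.GetProperty(survey.PropertyId);
    property.DateOfLastSurvey = survey.SurveyDate;
    property.Surveys.Add(survey);
    repository.SaveChanges();

    emailer.SendEmail(property.ManagerEmail, "New survey entered", "blah blah blah");

    auditLog.RecordCreated(survey);
    auditLog.RecordUpdated(property);
  }
}

The controller (or WebForms page, or PHP script) is then left doing UI things – model binding, validation messages, redirects – while the rules live somewhere that can be unit tested and reused.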

Objects are Services

Look at the MVC example above – in reality, the object here is the MVC controller itself. Similarly, I’ve seen SOAP or REST services which could qualify. Have a read of Bob Martin’s post here: http://blog.cleancoder.com/uncle-bob/2014/09/19/MicroServicesAndJars.html

Martin makes the excellent point that an object is nothing if not a service – whether it’s implemented as a SOAP endpoint, a queue, a PHP script, or just an object in a DLL. You can write a beautiful service-oriented-architecture in a single-project console application, if you structure it correctly, and if you take the time to really think about what your objects are.

Your objects are not your data.

Your data are the messages passed to and from your objects.

On Elephant Traps

Elephant trap (software): ground that looks firm and takes one’s initial weight, but which subsequently drops out from under you, leaving you in a dark pit, staring up at the disapproving face of your project manager as they scatter the ashes of your estimates over you like sarcastic confetti.

The difference between real elephant traps and software ones is that the software traps aren’t intentional (and don’t actually trap elephants, which is Bad™). Instead, software traps are created by our desire to avoid risk – a desire which, in the process, creates massive unknown risks that far outweigh the initial risk we thought we were avoiding.

A real example from my career was when we decided to move from using GUIDs as primary keys to integers. This was a demonstrably Good Thing To Do, but it was also a huge change: our application logic lived primarily in stored procedures, and all of these took GUIDs as keys (not to mention our ADO.NET application code, web pages, etc). In our desire to avoid risk, we made a fateful decision: fudge it.

Where possible, we continued to use the GUIDs, and left the columns in place in the database. We created SQL functions to look up the real integer keys from GUIDs, and vice-versa, and peppered our stored procedures with these; meanwhile, the website kept passing GUIDs from page to page.

What we thought we were doing was avoiding risk:

change = risk ∴ fewer LoC changed = less risk

Unfortunately, this is false, and in fact is one of the main causes of risk in software. By being clever, we made the system less obvious – new developers had to get their heads around the fact that the keys passed via the website weren’t the actual database keys. This might have been OK, except that newer pages bypassed the GUIDs and just used integers, not to mention some pages that had to be modified to use integers to get better performance. Gradually, stored procedures evolved that could take either a GUID or an integer as a key – and would use one if the other was null (except when developers got confused, passed both in and the procedure crashed).

Smoke & Mirrors

If code is less obvious, it takes longer for a developer to fully understand it – and the result is that developers are more likely to inadvertently break things, because they don’t realise that a layer of smoke and mirrors is pretending to be something else to satisfy the needs of an old bit of VB6. In effect, we’re disguising complexity, and thus we create elephant traps.

As I see it, this is a legacy of old, poorly-understood code. In such projects, the application is so poorly known that developers cannot be sure what the effects of making code changes will be. The natural instinct therefore is to make changes in such a way as to change the code as little as possible – in practice, this usually means pushing changes down to the database, because a) it’s the lowest level, and b) it can be changed live, so if it breaks it can be fixed without a build.

Change != Risk

The problem we often face is that we have no way of knowing what effects our changes will have, and so our natural instinct is to make fewer changes. This results in brittle software and strangles innovation. No-one is going to risk adding a cool new feature if they’re too scared to change the page layout because no-one understands how that custom JavaScript-generated CSS engine resizes page content.

We should make breaking changes as soon as possible: the sooner we break things, the sooner we know the scale of the problem we face, and the better we understand the system (breaking something is, despite what Gandalf said, a fantastic way of finding out what something is).

More Tests = Less Risk

The answer is tests. An application with a comprehensive suite of unit tests, integration tests, automated UI tests and a manual tester has a high degree of safety – a developer just has to run the unit tests to see whether they’ve broken anything fundamental. To get the best out of this, the application should have a short feedback cycle – if it takes a day to get it running locally, developers will make as few changes as possible. If it’s weeks between builds and integration tests, then people will start resenting fixing weeks-old mistakes because they’re now working on something else (and in the worst case, these bugs will become Business As Usual).

A well-run project with unit tests that can be run in seconds, continuous integration builds and tests on every push, and regular builds to testing with, at the very least, a five minute manual smoke test, is one where developers are not afraid to make changes.

Don’t be mean to elephants! They never forget…

On the art of Simplicity

It must use a distributed SOA architecture!

Really? Are you sure that a simple database-models-website solution wouldn’t work just as well for your 100 users, not to mention improving developer productivity by allowing them to run/debug it in one go?

Pro tip: if you factor your application well, such that all large components (e.g. your file storage layer, messaging sub-system, order processor, customer credit checker, etc) are represented by interfaces and passed as parameters, then splitting a ‘monolithic’ application into SOA components becomes quite a simple exercise.
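As a minimal sketch of that idea (the interface and class names are invented for illustration):

// The order processor only knows about an interface...
public interface ICreditChecker
{
  bool HasSufficientCredit(int customerId, decimal amount);
}

public class InProcessCreditChecker : ICreditChecker
{
  public bool HasSufficientCredit(int customerId, decimal amount)
  {
    return true; // placeholder logic for the sketch
  }
}

public class OrderProcessor
{
  readonly ICreditChecker creditChecker;

  // The processor neither knows nor cares whether the checker runs in-process or is a
  // client for a separate service; moving it out later doesn't touch this class at all.
  public OrderProcessor(ICreditChecker creditChecker)
  {
    this.creditChecker = creditChecker;
  }
}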

We should think up-front about sharding the database!

Maybe. If the database is well-designed, sharding shouldn’t present a problem when it becomes necessary, but you might find that judicious use of de-normalisation, data warehousing and caching can take you a long way before you need to consider fragmenting the physical schema (millions of rows is not big data!).

We need to make the business rules configurable!

Maybe. See The Daily WTF for some thoughts on why configurability isn’t always a good idea.

I’ve seen so many instances where rules that are intrinsic were made configurable – I once saw a file storage library where users could ‘configure’ file extensions to map to different DLL handlers for thumbnails. Of course, only developers could write and deploy these modules, and the configuration was hard to script, so it became a manual step that had to be done after deployment of a new thumbnail handler.

Hard-coding a price list is probably a bad idea. Soft-coding a set of legal rules, on the other hand, can lead you into a world of hurt if the shape of those rules changes in the future. Where possible, factor the rules into a re-usable library and, if you need to make it configurable, you now only have one object to modify.
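A sketch of the distinction (the class names and the 48-hour figure are purely illustrative):

// The price list is data, so it belongs in configuration or a database...
public class PriceList
{
  readonly IDictionary<string, decimal> prices;

  public PriceList(IDictionary<string, decimal> prices)
  {
    this.prices = prices;
  }

  public decimal PriceFor(string sku)
  {
    return prices[sku];
  }
}

// ...whereas a legal rule is logic: one re-usable, unit-tested class, changed by a developer
// when the law changes rather than by someone editing a config file on a live server.
public static class WorkingTimeRules
{
  public static bool ExceedsWeeklyLimit(TimeSpan hoursWorked)
  {
    return hoursWorked > TimeSpan.FromHours(48);
  }
}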

What about an Enterprise Service Bus?

No. Just no.

(You might need message queues for background processing, and no harm done. You might even decide on a common queuing engine – be it a library, service or just a pattern – but as soon as you start referring to it as the Enterprise Service Bus, it takes on a gruesome life of its own and summons Cthulhu.)

So I was thinking that we should have a common repository pattern where we serialise business objects into DTOs—

What’s wrong with ADO.NET and SQL queries? Or you could get fancy and set up an Entity Framework data context, or the ORM of your choice. Unless you absolutely need some sort of API-based interface, why worry about serialisation at all? (And when you do need it, JSON.Net does a brilliant job.)

By all means have a pattern for DALs/repositories, but for smaller projects/simpler databases you might be able to just have a single DAL representing your database.

Let’s build a SPA UI framework!

Let’s not. If you need a SPA, use Angular, Ember etc, but maybe you don’t need one yet – if your data set is small and your customer base isn’t massive, you can probably get away with vanilla ASP.NET MVC/Rails/Node.js/etc. Add Knockout.js or even just some simple jQuery/vanilla JavaScript on top for bits of rich UI. If this starts looking like spaghetti, then it’s time to think about going to the SPA and putting a framework in place.

Hell, I had to re-write a simple data entry system (PHP, 10 users, about a dozen screens, internal only) a couple of years back – I wrote it in ASP.NET WebForms (boom!) in a weekend using GridViews, ObjectDataSources and Membership controls.

But changing code after the fact is hard and messy – this is how systems become Big Balls of Mud!

Ye-es, to an extent. Certainly this happens if you do a half-assed job of it and try to change things piecemeal.

If your system is well-factored with clear separations between layers and components, then it should be simple to upgrade/refactor/change components. If you’re going to move from ASP.NET MVC to Ember.js, then make sure you do it wholesale – change the entire front-end UI and don’t call ‘done’ until it’s all done. If you’re going to move that order processing object into a web service, move the whole thing and delete the old implementation lest others try to use it.

The problem is not change – the problem is half-a-job changes where people think they’re playing it safe by leaving the old stuff there ‘just in case’; in reality, it’s much safer to break the system now rather than leave time-bombs in for the future.

Unit testing all subclasses

This post will demonstrate a simple technique using Visual Studio and NUnit for writing a single unit test that tests all current and future implementations of a base class.

Background

My current project has an abstract class representing an email message; the abstract base contains the basics like ‘to’, ‘CC’, ‘subject’ and so on, and each type of email that the app sends is represented by a concrete class inheriting from the base. In a separate class library, we have a template engine which takes an email object, loads an appropriate template, and uses the awesome RazorEngine to build the email HTML for sending.

For example, we have PasswordResetEmail, which sends users a new password if they forgot theirs; this has UserName and NewPassword properties to be injected into the template.

Email inheritance diagram
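In code, the hierarchy looks roughly like this – a sketch only, in which everything beyond the ‘to’/‘CC’/‘subject’ basics and the UserName/NewPassword properties mentioned above is illustrative:

public abstract class EmailBase
{
  public string To { get; set; }
  public string CC { get; set; }
  public string Subject { get; set; }
}

// One concrete email per message type the app sends
public class PasswordResetEmail : EmailBase
{
  public string UserName { get; set; }
  public string NewPassword { get; set; }
}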

For simplicity, the templates are stored as embedded resources in the assembly, so this entire process is eminently unit testable.

Testing all templates

I want to test that the template engine can successfully process each email object; there are a dozen or so of these, and we can envisage another dozen or so being added. Initially, we tested a couple of templates and then called it a day, but recently we found that we had Razor compilation errors in some of the untested ones.

We could have copy/pasted tests for each template, but:

  1. that violates the DRY principle
  2. it’s easy to forget to test new templates
  3. frankly, it just looks messy

Instead, here’s what we wrote:

public static EmailBase[] EmailObjects
{
	get
	{
		var objects = new List<EmailBase>();
		
		// get all inheriting types
		var types = typeof(EmailBase).Assembly
			.GetTypes()
			.Where(type => type.IsSubclassOf(typeof(EmailBase)));

		foreach (var type in types)
		{
			// get the default constructor
			var ctor = type.GetConstructor(new Type[] { });
			if (ctor != null)
			{
				objects.Add((EmailBase)ctor.Invoke(new object[] { }));
			}
		}

		return objects.ToArray();
	}
}

[Test]
[TestCaseSource("EmailObjects")]
public void RazorTemplateEngine_RenderHtmlTest(EmailBase model)
{
	ITemplateEngine target = new RazorTemplateEngine();
	string html = target.RenderHtml(model);
	Assert.That(HtmlIsValid(html)); // details excluded for brevity
}

Et voila! With a simple bit of reflection and NUnit’s TestCaseSource attribute, we can automatically pass in all classes which inherit from EmailBase and test that our template engine can render their HTML successfully! If we think of new tests to add, we can use the same technique to apply them to all EmailBase objects in the codebase.

Final thoughts

Unit testing (or TDD if you like) is hard. We’ve noticed a tendency to disregard the normal rules of DRY, SOLID etc. when writing tests because they’re “just test code” – in reality, maintenance of tests is no less important than maintenance of the production codebase (arguably more so). Always consider whether there’s a more elegant way of writing test code!

This post tests the theme. Nothing to see here.

Literally: I just want to see how the themes look with some actual content. I don’t have the energy to post anything interesting, as it’s work tomorrow and I’m still under the lingering influence of a stinking cold.

On the plus side, I’ve spent the weekend successfully laying a new border in the garden, whilst my laptop worked away migrating the remaining TFS-VC repositories to VSOnline + Git – which went surprisingly smoothly, and I’ll write up the details in a later post.

I have a few posts echoing around inside my head, which I’ll start posting over the next few weeks. In the meantime, here is a pretty picture!

White clematis