SUNrise architecture – Part 1

I recently promised Twitter that I’d blog about the architecture of SUNrise.

Caveat: this won’t be an exhaustive document, partly for reasons of confidentiality, and partly because I’m writing this in the hour or so between putting my son to bed and the time where the tumbler of whisky next to me finally sends me to sleep.

A bit of background: SUNrise is an enterprise Graphics Lifecycle Management platform, written in-house at Sun Branding Solutions using (primarily) .NET and Microsoft Azure. It replaces our previous flagship product, ODIN, which dates from the early 2000s and runs on a combination of COM+, VB6 and Microsoft Project (yes, really). We began writing SUNrise in late 2013, although prototypes had been kicking around since late 2012.

We were fortunate enough to start work on SUNrise just as Microsoft Azure was becoming an attractive platform. We began using Cloud Services, but have since moved to using App Services (formerly Websites).

[Image: SUNrise technology map]

Our core data platform is SQL Azure, although we are increasingly adding denormalized lookup data to Azure Table Storage, and we maintain a separate search index using Azure Search. We also maintain (and have had to use!) a read-only replica of our core database in another region; besides providing failover in the event of SQL Azure outages (yes, they have happened), it serves as the data source for our ETL processes, which keeps them from clogging up the transactional system.
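
For the curious, pushing one of those denormalized lookup rows into Table Storage with the storage SDK looks something like this. The entity and table names are invented for illustration; our real schema is rather larger:

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical denormalized lookup entity, not our production schema.
public class CustomerLookupEntity : TableEntity
{
    public CustomerLookupEntity() { } // required for deserialization

    public CustomerLookupEntity(string customerId, string name)
    {
        PartitionKey = "customer"; // coarse partition for a small lookup set
        RowKey = customerId;       // natural key for fast point lookups
        Name = name;
    }

    public string Name { get; set; }
}

public static class LookupWriter
{
    public static async Task UpsertAsync(string connectionString, CustomerLookupEntity entity)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var table = account.CreateCloudTableClient().GetTableReference("lookups");
        await table.CreateIfNotExistsAsync();

        // InsertOrReplace makes the write idempotent, so an ETL re-run is harmless.
        await table.ExecuteAsync(TableOperation.InsertOrReplace(entity));
    }
}
```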

[Image: Logical architecture of SUNrise]

By using Azure Blob Storage, we can provide highly scalable binary storage for our clients without having to buy the storage up-front – we pay for what we use and factor this into our pricing model. Using the storage APIs means that we can simply keep pushing files into storage, without worrying about folder or file limits or path length issues – under the covers, we simply assign each binary file a GUID, and use that as the URI pattern.
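
A rough sketch of that pattern with the storage SDK (the container name and plumbing here are illustrative, not our production code):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BinaryStore
{
    // Upload a file and return the GUID-based URI we store against the record.
    public static async Task<Uri> SaveAsync(string connectionString, Stream content)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("artwork");
        await container.CreateIfNotExistsAsync();

        // One GUID per file sidesteps folder hierarchies, path-length
        // limits and name collisions entirely.
        var blob = container.GetBlockBlobReference(Guid.NewGuid().ToString());
        await blob.UploadFromStreamAsync(content);
        return blob.Uri;
    }
}
```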

SUNrise is a distributed application consisting of several websites that host various parts of the logical application, from the core customer-facing website to the login screen and the integration API. Each of these is hosted as an App Service, backed by a hosting plan with a minimum of two instances. The diagram below shows how we use a mixture of hosting plans of various sizes, from a single large plan running a single site to a smaller plan hosting a mixture of lesser-used or less-intensive applications. The joy of using Azure is that changing this layout is a matter of configuration rather than ordering new physical hardware.

[Image: App Service hosting plan layout]

As well as customer-facing web applications, we run background processes on WebJobs, using a mixture of continuous and scheduled processes. Because the WebJobs SDK abstracts away the plumbing of receiving queue messages, we have been able to write durable and reliable queue processors and task schedulers with very little code (the actual processes that they kick off are a different matter!).
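
As a flavour of how little code that takes, here is a minimal continuous WebJob in the style of the WebJobs SDK; the queue name is invented for the example:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Program
{
    public static void Main()
    {
        // Storage connection strings come from app config (AzureWebJobsStorage);
        // RunAndBlock keeps the continuous WebJob alive.
        new JobHost().RunAndBlock();
    }
}

public class Functions
{
    // The SDK handles polling, dequeuing, retries and poison-message
    // handling; we only write the body of the processor.
    public static void ProcessQueueMessage(
        [QueueTrigger("workflow-tasks")] string message,
        TextWriter log)
    {
        log.WriteLine("Processing: {0}", message);
        // ... kick off the actual background work here ...
    }
}
```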

Other technologies used include Redis Cache, SSIS, ApplicationInsights and Cloud Services.

The application is written in .NET 4.6, but we are actively porting our code to .NET Core 1.0 (a process that we’ll complete with the release of Visual Studio 2017). We aim to use mostly POCOs and don’t rely on any one framework; to that end, we’re gradually moving towards Dapper for data access, and away from Entity Framework.
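
To show why Dapper appeals, here is the shape of a typical query mapped straight onto a POCO; the Job class and SQL are illustrative, not our actual model:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

// Hypothetical POCO: no base class, no attributes, no framework dependency.
public class Job
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public static class JobRepository
{
    public static IEnumerable<Job> GetOpenJobs(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // Dapper materialises each row into a plain POCO -- no
            // change tracking, no mapping configuration.
            return connection.Query<Job>(
                "SELECT Id, Title FROM Jobs WHERE Status = @Status",
                new { Status = "Open" });
        }
    }
}
```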

We use a wide range of .NET technologies, from Entity Framework and Windows Workflow Foundation to WCF, Web API, MVC and Razor.

The “secret sauce” of SUNrise is the workflow engine, a custom-built domain-specific language for modelling and running our customers’ business processes. For scalability, we run most of the intensive processing for this in a background WebJob, using Storage Queues to pass messages between the application and processing tiers. By using a competing-consumers model and idempotent, stateless messages, we can easily scale this system out by increasing the number of instances within the hosting plan, and in fact we do this automatically, so that higher volumes of messages in the queue will spin up more servers to handle them.
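
At the storage-queue level, the competing-consumers pattern looks roughly like this. The WorkflowStore facade is a stand-in for our real engine, and in production the WebJobs SDK handles the dequeue/delete plumbing for us:

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Queue;

public static class CompetingConsumer
{
    // Every instance in the hosting plan runs this same loop. A dequeued
    // message is invisible to the other consumers until its visibility
    // timeout lapses, so instances compete for work safely.
    public static void PollOnce(CloudQueue queue)
    {
        CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
        if (message == null) return; // nothing to do

        var stepId = Guid.Parse(message.AsString);

        // Idempotency guard (hypothetical check): if a retry or a second
        // consumer already ran this step, doing nothing is the correct result.
        if (!WorkflowStore.IsStepComplete(stepId))
        {
            WorkflowStore.ExecuteStep(stepId);
        }

        // Delete only once the work is durably done; a crash before this
        // line means the message reappears and is retried.
        queue.DeleteMessage(message);
    }
}

// Stand-in for SUNrise's real workflow persistence.
public static class WorkflowStore
{
    public static bool IsStepComplete(Guid stepId) { return false; }
    public static void ExecuteStep(Guid stepId) { }
}
```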

Development Environment

Our developers all run Visual Studio 2015 Enterprise Edition, and work on their own laptops against local databases. We use a combination of SSDT and FluentMigrator to model our data layer, the upshot of which is that developers can build a local copy of the database with a single command, so that each developer is working on their own segregated data island. Where certain services aren’t available locally (such as Azure Search), we emulate them using the next-best equivalent (in the case of Azure Search, we run ElasticSearch locally to provide a similar environment, and perform more detailed integration testing against a testing Azure subscription).
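
For anyone who hasn’t met FluentMigrator, a migration is just an ordinary C# class, which is what makes the single-command database build easy to script. A hypothetical example (the table and column are invented):

```csharp
using FluentMigrator;

// Hypothetical migration -- real SUNrise migrations are internal.
[Migration(201701011200)]
public class AddJobReferenceColumn : Migration
{
    public override void Up()
    {
        Alter.Table("Jobs")
            .AddColumn("ExternalReference").AsString(50).Nullable();
    }

    public override void Down()
    {
        Delete.Column("ExternalReference").FromTable("Jobs");
    }
}
```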

We all run SQL Server 2016 Developer Edition on our machines, as well as the Azure Storage Emulator, and various services such as Redis and Papercut (to emulate local SMTP).

We host our source code, CI builds, release and test automation on Visual Studio Team Services, but that’s a subject for another blog post!