Dependency Injection with Model State Validation in ASP.NET Core

(caveat: this may or may not work in normal ASP.NET, I haven’t tried it yet)

“Validation” is a wonderfully ambiguous term. Usually, in simple examples, it’s restricted to ensuring that required fields are present, that dates are in the correct format, and so on. All of these only require access to the object being validated, which can be very easily done using data annotations:

using System;
using System.ComponentModel.DataAnnotations;

public class ValidateMe
{
  [Required]
  [StringLength(50)]
  public string ImARequiredField { get; set; }
}

You can take this a step further if, for example, you want to run rules on the data in the object by implementing IValidatableObject. An example:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class ValidateMe : IValidatableObject
{
  [StringLength(50)]
  public string ImARequiredField { get; set; }

  public bool TheFieldIsRequired { get; set; }

  public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
  {
    if (TheFieldIsRequired)
    {
      yield return new ValidationResult("The field is required", new string[] { nameof(ImARequiredField) });
    }
  }
}

Great! Now we can wire this up in our controller:

public IActionResult DoStuff(ValidateMe model)
{
  if (ModelState.IsValid)
  {
    // do stuff, then return a success result
    // ("Index" here is just a placeholder for your happy-path action)
    return RedirectToAction("Index");
  }
  else
  {
    return View(model);
  }
}

Fantastic! But wait, what’s this? Oh dear, it’s the Real World(TM) come to burst our bubble.

Let’s take a more realistic example. Let’s say that we have a model that needs to set up a new user account. In our system, email addresses must be unique, and so we need to validate this before saving the user. We want to have a nice, clean, de-coupled architecture with injected dependencies, but where can we inject the dependencies into our validation system? Without them, we’d end up like this:

public IActionResult AddUser(AddUserModel model, [FromServices] IEmailValidator emailValidator)
{
  if (!emailValidator.IsEmailAvailable(model.Email))
  {
    ModelState.AddModelError("Email", "Email address is taken");
  }

  if (ModelState.IsValid)
  {
    // do stuff, then return a success result as before
    return RedirectToAction("Index");
  }
  else
  {
    return View(model);
  }
}

Well, that’s ugly – it’s not terrible, but it bloats our controller and, as it turns out, we don’t need to do it at all. Enter ValidationContext!

public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
{
  // magic happens here!
  var emailValidator = (IEmailValidator)validationContext.GetService(typeof(IEmailValidator));

  // now I has dependency, I can use ftw!
  if (!emailValidator.IsEmailAvailable(Email))
  {
    yield return new ValidationResult("Email address is taken", new string[] { nameof(Email) });
  }
}

Without doing anything special, ASP.NET is clever enough to pass its IServiceProvider into the ValidationContext, so we now have access to any dependency that our application needs. The only slight downside is that the Validate method isn’t async, so we’ll need to wrap any async calls in Task.Run(...), but that’s a small price to pay for keeping our controllers slim.
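
For this to work, the dependency still needs to be registered with the container in the usual way. As a minimal sketch (EmailValidator is a hypothetical implementation backed by your user store):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // IEmailValidator is what our Validate method resolves via
    // validationContext.GetService; EmailValidator is a hypothetical
    // implementation that checks the user store for existing addresses.
    services.AddScoped<IEmailValidator, EmailValidator>();
}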

SUNrise architecture – Part 1

I recently promised Twitter that I’d blog about the architecture of SUNrise.

Caveat: this won’t be an exhaustive document, partly for reasons of confidentiality, and partly because I’m writing this in the hour or so between putting my son to bed and the time where the tumbler of whisky next to me finally sends me to sleep.

A bit of background: SUNrise is an enterprise Graphics Lifecycle Management platform, written in-house at Sun Branding Solutions using (primarily) .NET and Microsoft Azure. It replaces our previous flagship product, ODIN, which dates from the early 2000s and runs on a combination of COM+, VB6 and Microsoft Project (yes, really). We began writing SUNrise in late 2013, although prototypes had been kicking around since late 2012.

We were fortunate enough to start work on SUNrise just as Microsoft Azure was becoming an attractive platform. We began using Cloud Services, but have since moved to using App Services (formerly Websites).

[Image: SUNrise technology map]

Our core data platform is SQL Azure, although we are increasingly starting to add de-normalized lookup data into Azure Table Storage, and we maintain a separate search index using Azure Search. We maintain (and have had to use!) a read-only replica of our core database in another region, and, apart from failover in the event of SQL Azure outages (yes, they have happened), we use this as the data source for our ETL processes to avoid clogging up the transactional system.

[Image: Diagram showing the logical architecture of SUNrise]

By using Azure Blob Storage, we can provide highly scalable binary storage for our clients without having to buy the storage up-front – we pay for what we use and factor this into our pricing model. Using the storage APIs means that we can simply keep pushing files into storage, without worrying about folder or file limits or path length issues – under the covers, we simply assign each binary file a GUID, and use that as the URI pattern.
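
As a rough sketch of that pattern (using the WindowsAzure.Storage client library; the container name and connection string here are illustrative):

// assign each binary a GUID and push it straight into blob storage --
// no folder hierarchies or path-length limits to worry about
public async Task<Guid> SaveAssetAsync(Stream content, string connectionString)
{
    var account = CloudStorageAccount.Parse(connectionString);
    var container = account.CreateCloudBlobClient().GetContainerReference("assets");

    var id = Guid.NewGuid();
    var blob = container.GetBlockBlobReference(id.ToString());
    await blob.UploadFromStreamAsync(content);

    return id;
}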

SUNrise is a distributed application, consisting of several websites that host the various parts of the logical application, from the core customer-facing website to the login screen and the integration API. Each of these is hosted as an App Service, backed by a hosting plan with a minimum of two instances. The diagram below shows how we use a mixture of hosting plans of various sizes, from a single large plan running a single site, to a smaller plan hosting a mixture of lesser-used or less-intensive applications. The joy of using Azure is that changing this layout is a matter of configuration rather than ordering new physical hardware.

[Image: App Service hosting plan layout]

As well as customer-facing web applications, we run background processes on WebJobs, using a mixture of continuous and scheduled processes. By abstracting the plumbing of receiving queue messages, we have been able to write durable and reliable queue processors and task schedulers with very little code (the actual processes that they kick off are a different matter!).
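
The WebJobs SDK takes care of the message-pump plumbing for us; a continuous queue processor boils down to something like the sketch below (the queue name and message handling are illustrative):

public class Functions
{
    // The SDK polls the queue, hands us each message, and deals with
    // retries and poison messages; we just write the handler.
    public static async Task ProcessWorkflowMessage(
        [QueueTrigger("workflow-tasks")] string message,
        TextWriter log)
    {
        await log.WriteLineAsync($"Processing: {message}");
        // kick off the actual work here
    }
}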

Other technologies used include Redis Cache, SSIS, ApplicationInsights and Cloud Services.

The application is written in .NET 4.6, but we are actively porting our code to .NET Core 1.0 (a process that we’ll complete with the release of Visual Studio 2017). We aim to use mostly POCOs and don’t rely on any one framework; to that end, we’re gradually moving towards Dapper for data access, and away from Entity Framework.

We use a wide range of .NET technologies, from EF to WF, WCF, WebAPI, MVC and Razor.

The “secret sauce” of SUNrise is the workflow engine, a custom-built domain-specific language for modelling and running our customers’ business processes. For scalability, we run most of the intensive processing for this in a background WebJob using Storage Queues to pass messages between the application and processing tiers. By using a competing consumers model and idempotent/stateless messages, we can easily scale this system up by increasing the number of instances within the Hosting Plan, and in fact we do this automatically, so that higher volumes of messages in the queue will spin up more servers to handle them.

Development Environment

Our developers all run Visual Studio 2015 Enterprise Edition, and work on their own laptops against local databases. We use a combination of SSDT and FluentMigrator to model our data layer, the upshot of which is that developers can build a local copy of the database with a single command, so that each developer is working on their own segregated data island. Where certain services aren’t available locally (such as Azure Search), we emulate them using the next-best equivalent (in the case of Azure Search, we run ElasticSearch locally to provide a similar environment, and perform more detailed integration testing against a testing Azure subscription).
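
To give a flavour of the migrations side, a single FluentMigrator migration looks something like this (the table is hypothetical, but the fluent API is as shown):

[Migration(201701100900)]
public class AddUserTable : Migration
{
    public override void Up()
    {
        Create.Table("Users")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Email").AsString(256).NotNullable()
            .WithColumn("CreatedUtc").AsDateTime().NotNullable();
    }

    public override void Down()
    {
        Delete.Table("Users");
    }
}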

We all run SQL Server 2016 Developer Edition on our machines, as well as the Azure Storage Emulator, and various services such as Redis and Papercut (to emulate local SMTP).

We host our source code, CI builds, release and test automation on Visual Studio Team Services, but that’s a subject for another blog post!

Using PowerShell to update DotNet Core version numbers

The problem: I have a DotNet Core package that I would like to build and publish to an internal NuGet feed using Visual Studio Team Services. If I were using a .nuspec or .csproj file, I could set the version number of the package during the build, but DotNet Core (with project.json) doesn’t let me do this.

The solution: Write a very small PowerShell script that will update project.json with a new version number, then plug that into the build.

The script

param(
    [string]$path,    # path to project.json
    [string]$version  # new version number to write
)

# read project.json, update the version property, and write it back out
$project = Get-Content $path -Raw | ConvertFrom-Json
$project.version = $version
ConvertTo-Json -InputObject $project -Depth 32 | Out-File $path

So, what’s going on here? Not a lot: we’re reading project.json into an object, modifying its version property (if you do this from the command line, you even get IntelliSense for project.json!), and then re-serializing the object to overwrite the original file.

Annoyingly, this will screw up the formatting of project.json, but that doesn’t really matter: the modified file only lives for the duration of the build and never goes back into source control.

Plugging it into VSTS

I’ve added a build variable to store the desired version number:

Package.Version = 1.0.$(Build.BuildId)-*

Then I’ve added a PowerShell step as the first step of the build. I’ve saved my script as Set-DotNetVersion.ps1, and set up the following arguments:

[Screenshot: PowerShell build step arguments]
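
(In short: the step points at Set-DotNetVersion.ps1, the script’s -path argument points at the project.json of the library being packed, and -version is set to the $(Package.Version) variable defined above; the exact paths will depend on your repository layout.)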

That’s it! If you then call dotnet pack on your library, it’ll create a NuGet package with the version number that you specified above. My full build pipeline is:

  • PowerShell Script
  • Dotnet restore
  • Dotnet pack
  • Publish build artifacts

Custom Authentication in ASP.NET MVC Core

ASP.NET Core has really good out-of-the-box support for authorization and authentication via ASP.NET Identity. However, sometimes that’s not enough, and you need to roll your own logic.

In my case, I was writing an API for sending emails, which many different clients would connect to. Each client would be given an “API Key” that would identify and authorize their requests. I could have implemented this using HTTP Basic authentication, or using OAuth tokens, but instead I thought it’d be good to get my hands dirty in the base classes.

So, the scenario is that a client sends a request with a known header value, as in:

$Headers = @{
    Authorization = "abcdefg1234567"
}

$Data = @{
    To = "you@example.com";
    From = "me@example.com";
    Subject = "Hello!";
    Body = "How you doing?"
}

Invoke-WebRequest -Uri "https://myservice.com/emails/send" -Method Post -Headers $Headers -Body $Data

The actual value of the key and the logic behind it aren’t relevant to this article (I’m using RSA private keys under the hood), but the point is that we have a method in our code to verify the authentication key.

To do this, we need to implement custom authentication middleware. This will be composed of three parts:

  • The middleware class
  • The middleware options
  • The authentication handler

The middleware inherits from Microsoft.AspNetCore.Authentication.AuthenticationMiddleware<TOptions>, where TOptions inherits from Microsoft.AspNetCore.Builder.AuthenticationOptions. It’s responsible for building new instances of our authentication handler, and for injecting any required dependencies into the handler. For the purposes of this example, let’s suppose that I have an interface to validate my API keys, like so:

public interface IApiKeyValidator
{
    Task<bool> ValidateAsync(string apiKey);
}

Let’s assume that I’ve implemented this somehow (maybe using Azure KeyVault), so my challenge now is to hook my custom logic up to MVC, so that its built-in authentication can kick in. The end result is that I can decorate my controllers with:

[Authorize(ActiveAuthenticationSchemes = "apikey")]
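
The implementation of IApiKeyValidator doesn’t matter for the rest of this post, but for a complete picture, a trivial (purely illustrative) in-memory version might look like this:

public class InMemoryApiKeyValidator : IApiKeyValidator
{
    // In the real service the keys live somewhere sensible (e.g. Azure
    // Key Vault); a HashSet keeps this example simple.
    private static readonly HashSet<string> KnownKeys = new HashSet<string>
    {
        "abcdefg1234567"
    };

    public Task<bool> ValidateAsync(string apiKey)
    {
        return Task.FromResult(!string.IsNullOrEmpty(apiKey) && KnownKeys.Contains(apiKey));
    }
}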

You can think of the middleware as the glue that binds the MVC framework with our business logic. We could start by writing:

public class ApiKeyAuthenticationOptions : AuthenticationOptions
{
    public const string DefaultHeaderName = "Authorization";
    public string HeaderName { get; set; } = DefaultHeaderName;

    public ApiKeyAuthenticationOptions()
    {
        // must match the scheme name used in [Authorize(ActiveAuthenticationSchemes = "apikey")]
        AuthenticationScheme = "apikey";
    }
}

public class ApiKeyAuthenticationMiddleware : AuthenticationMiddleware<ApiKeyAuthenticationOptions>
{
    private IApiKeyValidator _validator;
    public ApiKeyAuthenticationMiddleware(
       IApiKeyValidator validator,  // custom dependency
       RequestDelegate next,
       IOptions<ApiKeyAuthenticationOptions> options,
       ILoggerFactory loggerFactory,
       UrlEncoder encoder)
       : base(next, options, loggerFactory, encoder)
    {
        _validator = validator;
    }

    protected override AuthenticationHandler<ApiKeyAuthenticationOptions> CreateHandler()
    {
        return new ApiKeyAuthenticationHandler(_validator);
    }
}

This is all just glue code. The real meat lies in ApiKeyAuthenticationHandler, which can be stubbed out as follows:

public class ApiKeyAuthenticationHandler : AuthenticationHandler<ApiKeyAuthenticationOptions>
{
    private IApiKeyValidator _validator;
    public ApiKeyAuthenticationHandler(IApiKeyValidator validator)
    {
        _validator = validator;
    }

    protected override Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        throw new NotImplementedException();
    }
}

As you can probably tell, our logic lives in the HandleAuthenticateAsync method. In here, we can basically do whatever we want – we can inject any dependency that the application knows about through the middleware, and the handler has access to the full request context. We could check the URL, querystring, form values, anything. For simplicity, here’s a really cut-down implementation:

// inside: protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
StringValues headerValue;
if (!Context.Request.Headers.TryGetValue(Options.HeaderName, out headerValue))
{
    return AuthenticateResult.Fail("Missing or malformed 'Authorization' header.");
}

var apiKey = headerValue.First();
if (!await _validator.ValidateAsync(apiKey))
{
    return AuthenticateResult.Fail("Invalid API key.");
}

// success! Now we just need to create the auth ticket
var identity = new ClaimsIdentity("apikey"); // the name of our auth scheme
// you could add any custom claims here
var ticket = new AuthenticationTicket(new ClaimsPrincipal(identity), new AuthenticationProperties(), "apikey");
return AuthenticateResult.Success(ticket);

I’ve kept this example simple, but at this point you have the power to construct any sort of claims identity that you like, with as much or as little information as you need.

Finally, we just need to tell MVC about this middleware in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // the implementation of this is left to the reader's imagination
    services.AddTransient<IApiKeyValidator, MyApiKeyValidatorImpl>();
    // etc...
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseMiddleware<ApiKeyAuthenticationMiddleware>();
    // etc...
}

And that’s it! Custom business logic, dependency injection and MVC all working together beautifully.

Typescript interfaces for C# developers

Or, how not to make the mistakes I made with Typescript.

As a long-time .NET developer, I took the plunge into TypeScript a couple of years ago after much uhm-ing and ah-ing. I learned that you could write interfaces in TypeScript, and I immediately started doing them wrong.

In .NET, an interface is something that you expect to be implemented in a class. For example:

public interface IFoo {
    string Bar { get; }
    string Baz(string bar);
}

public class FunkyFoo : IFoo {
    public string Bar { get; set; }
    public string Baz(string bar) => $"Hello {bar}";
}

You might code against the interface, but you would only really create interfaces if you expected several different concrete classes to implement that contract.

When I started writing TypeScript, I wrote interfaces in much the same fashion:

interface IFoo {
    Bar: string;
    Baz(bar: string): string;
}

class FunkyFoo implements IFoo {
    public Bar: string;
    public Baz(bar: string): string {
        return `Hello ${bar}`;
    }
}

The problem is that, while this works, it’s not really giving you the full benefit of TS interfaces. TypeScript, being a superset of JavaScript, leans much more functional than .NET – classes aren’t the most important thing in it. Using the approach above, I was writing code like:

let foo: Array<IFoo> = [];
$.getJSON("foo/bar", json => {
    for (let i = 0; i < json.length; i++) {
        foo.push(new FunkyFoo(json[i]));
    }
});

Ugh. Horrible and messy. If I’d used interfaces properly, I’d have something like this:

interface Foo {
  bar: string;
  baz: string;
}

// ...

let foo: Array<Foo>;
$.getJSON("foo/bar", json => {
    foo = json;
});

The realisation that dawned on me was that interfaces are just data contracts – nothing more. Due to the loose typing of JavaScript, I could quite happily say that an object was of type Foo, without doing any conversion.

So, what should I have been using interfaces for? Well, firstly, data contracts as seen above – basically just a way to define what data I expect to get from/send to the server. Secondly, for grouping sets of repeated parameters, as in:

function postComment(email: string, firstName: string, lastName: string, comment: string) {
  // etc
}

// becomes:
interface UserInfo {
    email: string;
    firstName: string;
    lastName: string;
}

// now the method is much cleaner,
// and we can re-use these parameters
function postComment(user: UserInfo, comment: string) {
  // etc
}

I’ve ended up thinking of interfaces as more akin to records in F# – just basic DTOs. Doing this has actually made my code better – now I have a clean separation between data and functionality, rather than mixing both in the same structure.

De-coupling after the fact

This post was prompted by Facebook’s recent decision (at the time of writing) to withdraw their Parse app platform. This is a prime example of why you should make all reasonable efforts to de-couple your application/business logic from your platform logic: it’s much more difficult to migrate a tightly-coupled application than a loosely-coupled one.

That said, I also came to the realisation that a well-written, DRY application should be relatively easy to de-couple retrospectively. How? Let’s take some examples.

The code

Let’s take as our example a module that saves user files to disk against an entity in a SQL database (for example, storing blog post assets). Firstly, let’s take a naive, highly-coupled approach:

private const string RootPath = "\\\\server\\share";

public void SaveFile(int entityId, string fileName, Stream fileStream)
{
    // should produce: \\server\share\1234\file.txt
    var fullPath = Path.Combine(RootPath, entityId.ToString(), fileName);
    if (File.Exists(fullPath))
    {
        throw new Exception("File already exists!!");
    }
    using (var fs = File.Create(fullPath))
    {
        fileStream.CopyTo(fs);
    }
}

public IEnumerable<string> LoadFiles(int entityId)
{
    var fullPath = Path.Combine(RootPath, entityId.ToString());
    return Directory.GetFiles(fullPath);
}

public Stream LoadFile(int entityId, string fileName)
{
    var fullPath = Path.Combine(RootPath, entityId.ToString(), fileName);
    if (File.Exists(fullPath))
    {
        return File.OpenRead(fullPath);
    }

    throw new FileNotFoundException("Can't find the file", fullPath);
}

Actually, this isn’t particularly highly-coupled. What we have is a module that deals with loading and saving files. In our application code, we might have methods like:

public ActionResult Download(int entityId, string fileName)
{
    try {
        var mgr = new FileManager();
        var data = mgr.LoadFile(entityId, fileName);
        return new FileStreamResult(data, "application/octet-stream");
    } catch (Exception ex) {
        return new HttpStatusCodeResult(500);
    }
}

In this method, we instantiate a new instance of our dependency, and then use that dependency to perform our application logic. If all of our application is written like the code above, we don’t really have much of a problem. Why? Think about it: all we actually have to do to turn this into a fully de-coupled application is:

  • Create an interface from our FileManager object
  • Create a new implementation using our new underlying platform (for example, using Amazon S3 storage, or WebDAV)
  • Replace all instances of our old class with the new one (if need be, changing any references from FileManager to IFileManager)

That’s just typing. It’s a night of find/replace and manually fixing the edge cases, but it’s not a nightmare – all of your platform logic is already encapsulated into an object, and that object can be replaced.
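
To make step one concrete, the extracted interface is just the existing public surface of FileManager (the S3-backed class below is hypothetical):

public interface IFileManager
{
    void SaveFile(int entityId, string fileName, Stream fileStream);
    IEnumerable<string> LoadFiles(int entityId);
    Stream LoadFile(int entityId, string fileName);
}

// the existing class simply declares that it fulfils the contract
public class FileManager : IFileManager
{
    // ... implementation as before ...
}

// and a new implementation can target a different platform entirely
public class S3FileManager : IFileManager
{
    // ... same members, different plumbing ...
}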

Where it all goes wrong

So what’s the problem? That happens when code doesn’t keep itself DRY. In the example above, the developers kept all of the logic around saving and loading files in a single object, and then used that object as a gateway for all file functions. By doing this, they made it very easy to change the behaviour of IO (there are many other benefits, such as making it easy to add logging and error handling).

In a depressingly large number of applications, this does not happen. What happens is that the developers know the common location for saving files, and so they either don’t write a centralised component, or, if they do, they don’t always use it. Instead, you see code like this:

public ActionResult Download(int entityId, string fileName)
{
    try {
        var path = $"\\\\server\\share\\{entityId}\\{fileName}";
        var data = File.OpenRead(path);
        return new FileStreamResult(data, "application/octet-stream");
    } catch (Exception ex) {
        return new HttpStatusCodeResult(500);
    }
}

The problem is that the above code is often repeated throughout the application – the developers found it easier to write the logic every time than to write, or use, a re-usable component.

When you have code like that above, de-coupling becomes more difficult, and can become totally impractical (especially if the logic is replicated across multiple applications in various languages – yes, this really does happen: I’ve supported an app where such logic was repeated across VB6 components, C# services, a classic ASP application and JavaScript).

Conclusion

If your code is DRY, then events like the Parse shutdown should be annoying rather than catastrophic. To quantify it:

  • Code is de-coupled and uses DI for dependencies
    = Write new dependencies, config change – late night and pizza
  • Code is DRY but doesn’t use DI
    = Write new dependencies, create interfaces, find/replace – long weekend or a few late nights
  • Logic is repeated through the app
    = Find new job

The disaster of OOP, and what we can salvage

Object Oriented Programming is a disaster, or at least that’s what Brian Will thinks, and I’m not about to disagree. I share his disillusionment with OOP, and I have always had a sneaking admiration for the PHP and JavaScript developers who just seem to get stuff done much faster than their static-typed object brethren.

However, one paragraph did stick out:

In [the] original vision, an object-oriented program is composed of a graph of objects, each an island of state unto itself. Other objects do not read or write the state of other objects directly but instead send messages.

Just because OOP as we write it today hasn’t produced the benefits we sought, doesn’t mean that there is no value in the original concepts. The difference between now and then is one of scale: how many of you out there are actually writing single programs, where a system is composed of objects that all live in a single app domain and communicate via messages?

I don’t think I’ve ever written such an app (it’s much harder in the web world in any case, where the app simply has no state from one request to the next). Where I have been writing actual, run-in-memory-UI-and-everything applications, they’ve been small utility apps that I’ve thrown together using RAD, that simply wrapped functionality that already existed in databases or services.

That, right there, is the part where OOP hasn’t failed – it has just, like the Roman Empire, transformed into something else. Instead of objects, think services – think how an enterprise system is composed of many disparate systems, services and databases, each of which are tied together by some form of message. In the worst case, they share state promiscuously and become an un-maintainable tangle, but in the best cases, we have services that sit serenely in the heavens and send messages back and forth to each other.

Or we could just hack it together with PHP and buy a bigger server.

Devops is just what you do in an SME

On reflection, I think I was fortunate to begin my career in a small organisation (80 employees with 2 IT techs and 4 developers at our biggest). I began my career converting Word documents into HTML in Dreamweaver to burn onto CDs for our clients. Then, I started building sites in Classic ASP and SQL Server, then I had to take on the role of SysAdmin. A year or so after that, we took on our first developer, and we became an ISV. A few months later, we had dedicated IT techs.

We never even knew there could be a split between dev and ops. We were developing the software, and we ran the servers. We scripted deployments (not entirely, but in large part), we wrote monitoring scripts, we set up production environments, we wrote scripts to replicate the production environment on our workstations.

We never reached the ideal of continuous, automated deployment but, on the other hand, we could (and did) make changes to internal and external systems with a few hours’ notice. The developers often spoke directly to clients and end-users, and we were often in a position to prototype change requests whilst the client was still in the meeting.

Moving from that environment to ones with a more traditional split between development and operations was an eye-opener, to say the least. Battling change control, ticket management and miscommunications taught me a whole raft of skills that I never even thought I’d need.

So if you want to hire people with a devops state of mind, recruit from SMEs – you might be pleasantly surprised.

Merging ModelState validation with Knockout models

Scenario: you have a server-side object in .NET, which you then serialize across to a client-side Knockout view model. Knockout supplies validation on the client-side, but you have one or two rules that you want to enforce on the server and only on the server, and for whatever reason you don’t want to run these asynchronously.

So let’s assume that you’ve got something like this:

public class LoginModel
{
    public string Username { get; set; }
    public string Password { get; set; }
}

This maps to a Knockout model which looks a little like:

function loginModel() {
  this.Username = ko.observable('').extend({ required: true });
  this.Password = ko.observable('').extend({ required: true });
}

All good. Let’s say that your controller action does this:

public IActionResult Login(LoginModel model)
{
  if (!CheckPassword(model.Username, model.Password))
    ModelState.AddModelError("Password", "Invalid password");

  if (!ModelState.IsValid)
  {
    return HttpBadRequest(ModelState);
  }

  // continue with the login and return whatever your happy path needs
  return Json(new { success = true });
}

Now, in your UI code, you can use the following snippet to map the server-side validation errors into your client-side model:

/**
 * Applies errors returned via the .NET model state error collection to a Knockout
 * view model by matching property names.
 * @param koModel Knockout view model to bind to
 * @param modelState Model state response (usually returned as JSON from an API call)
 */
function applyModelStateErrors(koModel, modelState) {
    // loop all properties of the `modelState` object
    for (var x in modelState) {
        if (modelState.hasOwnProperty(x)) {
            // try to get a property of our KO object with the same name
            var koProperty = koModel[x];

            // we're only interested in KO observable properties
            if (koProperty && ko.isObservable(koProperty)) {
                var error = modelState[x];
                var message = "";

                // .NET returns errors as an array per-property, but we
                // can check the type just to be safe
                if (error instanceof Array) {
                    message = error.join(", ");
                } else if (typeof error == "string") {
                    message = error;
                } else {
                   message = error.toString();
                }

                // set the error state for this property
                koProperty.setError(message);
            }
        }
    }
}

Given this, you can catch the 400 error in your AJAX call, use the function above, and map the server-side errors into your local model.

Entity Framework dynamic sorting

I have a fairly common requirement to enable user-defined sorting, for example using a querystring like so:

?page=2&sort=name&desc=true

This isn’t really catered for using LINQ, due to type-safety (you need to use an expression based on a property of your object to do sorting). To enable dynamic sorting on an IQueryable<TSource>, try these extension methods:

private static readonly MethodInfo OrderByMethod =
    typeof(Queryable).GetMethods()
    .Where(method => method.Name == "OrderBy")
    .Where(method => method.GetParameters().Length == 2)
    .Single();

private static readonly MethodInfo OrderByDescendingMethod =
    typeof(Queryable).GetMethods()
    .Where(method => method.Name == "OrderByDescending")
    .Where(method => method.GetParameters().Length == 2)
    .Single();

public static IQueryable<TSource> DynamicOrderBy<TSource>(this IQueryable<TSource> source, string sortBy, bool sortDescending = false)
{
    if (string.IsNullOrWhiteSpace(sortBy))
        return source;

    if (sortBy.Contains(','))
    {
        foreach (var sort in sortBy.Split(','))
        {
            source = source.DynamicOrderBy(sort, sortDescending);
        }

        return source;
    }
    else
    {
        var type = typeof(TSource);
        var parameter = Expression.Parameter(type, sortBy);
        var propertyRef = Expression.Property(parameter, sortBy);
        var lambda = Expression.Lambda(propertyRef, new[] { parameter });

        MethodInfo genericMethod = (sortDescending ? OrderByDescendingMethod : OrderByMethod)
            .MakeGenericMethod(new[] { typeof(TSource), propertyRef.Type });
        object ret = genericMethod.Invoke(null, new object[] { source, lambda });

        return (IQueryable<TSource>)ret;
    }
}

Putting this in your code allows you to write something like this:

// GetData() returns an IQueryable<TSource> for whichever entity you're querying
var data = GetData();
var sortedByNameDescending = data.DynamicOrderBy("Name", true);
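
Wiring this up to the querystring from the start of the post is then straightforward. A hypothetical MVC action (the default sort column and page size are illustrative) might look like:

// maps ?page=2&sort=name&desc=true onto the DynamicOrderBy extension
public IActionResult Index(int page = 1, string sort = null, bool desc = false)
{
    const int pageSize = 25;

    var results = GetData()                   // IQueryable<T> from your data layer
        .DynamicOrderBy(sort ?? "Name", desc) // EF wants an ordering before Skip/Take
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .ToList();

    return View(results);
}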