ASP.NET MVC applications: from classic example to modern-day architecture

It has been more than ten years since Microsoft released ASP.NET MVC as an alternative to ASP.NET Web Forms[i]. Originally intended to make the transition from desktop Windows applications to web applications easier for developers, Web Forms with its viewstates and events was often seen as a forced, non-web way of building web applications that hid the true nature of web development.

In 2007, after years of community debate, ASP.NET MVC was announced as the new way forward for Microsoft's web application development[ii]. However, the MVC design pattern from which it derives its name is one of the oldest software design patterns around, dating back to Smalltalk in the late 1970s and early 1980s.

The Model-View-Controller (MVC) pattern

The MVC pattern can be used wherever data needs to be presented in some form to a user or external system. Usually this means rendering marked-up content on a screen, although other forms (like the JSON output of a Web API service) can be seen as presentation as well. The MVC pattern emerged as a result of the object-oriented principle of separation of concerns:

  • The Model, containing the data to be presented and possibly notifying the view of state changes.
  • The View, the presentation logic for the data, possibly containing markup and conditional scripts, and providing ways to send user input back to the controller.
  • The Controller, sending the view to the rendering device and handling (user) communications from the view to the model and the rest of the application.

Diagrams vary a bit, but usually the MVC pattern is drawn as a triangle: the controller manipulates the model, the model updates the view, and the view sends user actions to the controller.

Although the pattern itself is still valid, the world has changed dramatically in the decade of ASP.NET MVC's existence. Applications became much more complex and distributed, with big data, microservices, links to various external systems, security and privacy demands, and mobile and cloud-based platforms. Modern business and project delivery methods like Scrum and DevOps demand flexible and highly testable solutions. This means the three parts of the pattern tend to become too complex, and SOLID[iii] principles demand more separation of logic.

Classic ASP.NET MVC implementation

In this article I assume the reader is familiar with at least the basics of ASP.NET MVC. When creating an ASP.NET MVC application in a .NET development environment, the classic MVC structure is quite apparent in the folder structure of the (Visual Studio) project:


Project
  |- Controllers
  |- Models
  |- Views

There will be some more folders for various things, but the MVC structure is clearly visible. The Views folder will contain the Razor views (.cshtml files) in folders according to naming convention, and the Models folder will actually contain viewmodels, but in simple (CRUD[iv]) applications and examples the model classes in here quite often reflect their data source directly.

In the traditional approach the controller classes often contained one or more GET methods, possibly with a parameter to get a specific model instance to return to a view, and some POST methods through which a model instance could be created or updated. Quite often the controller connected to the data source directly, creating some connection context either within the methods or on initialization of the controller class. So in general a controller could have a code structure like this:


public class ProductController : Controller
{
    SomeDbContext db = new SomeDbContext();   // The database context

    public ProductController()
    {
        // Code to initialize the database context further if needed
    }

    public ActionResult Index()
    {
        // …
        // Use the database context to get a list of products and create a list of "ProductModel" items named products to return to the "Index.cshtml" view
        // …

        return View(products);
    }

    public ActionResult Products(int id)
    {
        // …
        // If id is 0 redirect to Index, otherwise get the product with the specific id from the database context and return it to the "Product.cshtml" view
        // …

        return View(product);
    }

    [HttpPost]
    public ActionResult Create(Product product)
    {
        // …
        // Code to create a new product in the database
        // …
        return RedirectToAction("Index");
    }

    [HttpPost]
    public ActionResult Update(Product product)
    {
        // …
        // Code to update a product in the database
        // …
        return RedirectToAction("Index");
    }
}

This is pretty much a controller that can list products and perform CRUD operations on them. If the logic between retrieving data and sending it to the view gets more complex, it can be delegated to private functions on the controller or to separate business logic classes.

Model classes could be generated and the data accessed from a database using a framework like LINQ to SQL. In the traditional object-oriented way, classes should encapsulate data and functionality, so functions could be added directly to the class, or via the "partial" class construct Microsoft introduced to deal with extending auto-generated classes[v].
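As a minimal sketch of that construct (the class and members are illustrative, not generated by any particular tool), a generated entity could be extended in a separate file like this:

public partial class Product   // auto-generated part: data fields only
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public partial class Product   // hand-written part; the compiler merges both
{
    public string DisplayName()
    {
        return Name + " (" + Price.ToString("C") + ")";
    }
}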

In many tutorials and older examples this is the general setup shown for an ASP.NET MVC application, and for a simple CRUD application it can still be fine.

ASP.NET MVC in modern software

Nowadays software tends to be much more complex than the traditional 3-tier approach of web applications, which mainly consists of the application's data layer (often a relational database), a business logic layer and a presentation layer. The above approach has several disadvantages:

  • The use of the database context as a private field in the controller causes tight coupling between the two, making (unit) testing more complex and time consuming.
  • When complexity increases, the controller classes lose focus and increasingly violate the SOLID and DRY principles of good object-oriented design. The purpose of a controller class should be mediating between view, (user) interaction and the underlying application.
  • Using data models in the view can give unnecessary or even unwanted access to data fields and/or functions.
  • Models can get complex with added functions and dependencies. They are no longer focused on their primary role, which is holding and transferring data within and between systems.
  • With the increase of data volumes and the distribution of data across many locations, it is desirable to keep data requests and data transfers (and therefore data models) as compact as possible, since sending redundant data over networks and the internet can decrease performance dramatically.
  • Quite often modern software systems need to incorporate and communicate with third-party components and services, for example federated authentication systems and payment providers. Usually developers have no control over how and in what format these third-party systems deliver their data, causing a need for transformations and extra checks in the application.
  • Business demands and DevOps practices require fast and frequent updates of software parts. Therefore the fewer dependencies between components and classes, the better.

Removing the controller dependencies on datasource contexts

If we want to create automated (unit) tests, the first problem to overcome is the tight coupling between the controller and the database context. This can be done either by using reflection or some other bypass to replace the database context with a mock object on test initialization, or by using a real database for testing.

Especially the second option causes a lot of overhead for initialization before and cleanup after each test. On top of that, the communication with the database will severely slow down the unit tests, which can be unacceptable in a DevOps environment. The first option will not always be possible, since the data source may already require configuration or a valid connection when the controller instance is created.

Data for a controller can come from multiple sources and may need structuring, filtering or transforming. Because of distribution and scalability there is a tendency towards using web service and REST protocols for communication with data sources. The more general term "repository" has emerged to indicate the various forms of data storage and services.

In MVC applications, we create a "Repositories" folder and in there a repository class for each(!) data source. In our example we can create a class "SqlDbRepository", to which we move the SomeDbContext and any logic involving its initialization and data manipulation.

Since we implement the repository classes ourselves, we have full control over how and where the data sources are initialized and approached. We have also cleaned up our controllers by moving the code related to context initialization and data handling to the repository classes. By creating interfaces for the repository classes and using them in the controller, we have made our controller independent of a data source context or client implementation, and we can find ways to create mock implementations in unit tests without much effort.
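As a sketch of what such a repository could look like for the product example (the interface and method names are my own illustration, and I assume SomeDbContext exposes a Products set in the Entity Framework style):

public interface IProductRepository
{
    List<Product> GetAll();
    Product GetById(int id);
    bool Create(Product product);
    bool Update(Product product);
}

public class SqlDbRepository : IProductRepository
{
    // The database context, moved here from the controller
    SomeDbContext db = new SomeDbContext();

    public List<Product> GetAll()
    {
        return db.Products.ToList();
    }

    public Product GetById(int id)
    {
        return db.Products.Find(id);
    }

    public bool Create(Product product)
    {
        // Code to add the product to the context and save changes
        return true;
    }

    public bool Update(Product product)
    {
        // Code to update the product in the context and save changes
        return true;
    }
}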

Using the right models in the right place

More complex software quite often means a view needs to combine data from various sources. Therefore the models used by views may differ greatly from the models used for retrieving and transferring data. Think for example of an invoice view, which may need to combine data coming from a CRM system, a financial system and a postal code checking system. On top of that we may have little control over the format and content of the data delivered, so we may need to perform transformations before using it.

So the model classes in our MVC Models folder should be viewmodels tailored to the views that will use them, and the data for them should be transferred from datamodels or business logic models specific to incoming data transfers or processes. Quite often I see the code for these transfers spread across a project, in controllers or on the model classes themselves, usually looking something like this:


var personViewModel = new PersonViewModel
{
    Firstname = personData.Firstname,
    Lastname = personData.Prefix + " " + personData.Surname,
    BirthDate = personData.BirthDate,
    Address = personData.Street + " " + personData.HouseNumber,
    // …
};

From experience I know programming the logic for this can be tedious and time consuming.

It makes sense to delegate the operations for this data mapping to separate mapper classes, and to put these in a separate "Mappers" folder in our project structure. Tools like AutoMapper (https://automapper.org/) can be a great help in reducing the code that needs to be written for this. However, in high-performance applications the hand-coded approach may still be preferable, since these mapping tools can come with a small performance hit.

It is recommended to create a mapper class per target type (viewmodel). This class gets (usually static) operations taking one or more source objects. As a naming convention, give the class a name like "MapToTargetType" and implement the operations as From(SourceType source).
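Following that convention, a hand-coded mapper for the PersonViewModel example above could look like this (a sketch; the property names match the earlier snippet):

public static class MapToPersonViewModel
{
    public static PersonViewModel From(PersonData personData)
    {
        return new PersonViewModel
        {
            Firstname = personData.Firstname,
            Lastname = personData.Prefix + " " + personData.Surname,
            BirthDate = personData.BirthDate,
            Address = personData.Street + " " + personData.HouseNumber
        };
    }
}

A controller or service class then only needs a single call: var personViewModel = MapToPersonViewModel.From(personData);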

A small side note: although it would make sense to keep different types of models in different folders (e.g. "ViewModels", "DataModels"), I haven't seen this much on real projects yet. The "Models" folder that is generated by default somehow tends to end up as the place for all model classes in a project.

Constants and enumerations

Although constants and enumerations can be defined anywhere, the danger of having them in random places is that developers overlook them when they need them and create duplicate definitions in a project. I've seen quite a few projects where constants and enumerations were defined in several places in the code base. Quite often the duplicates tend to differ slightly from each other, introducing bugs in the system when code is altered.

Therefore it is not a bad practice to keep these in a separate folder named "Constants" in the root structure. Then, when code is altered or added and a developer needs a constant or enumeration, it is quite easy to check whether it has already been defined.
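For example, one shared definition in the "Constants" folder (the names here are purely illustrative):

// Constants/ProductConstants.cs
public static class ProductConstants
{
    public const int MaxNameLength = 100;
    public const string DefaultCategory = "General";
}

// Constants/ProductStatus.cs
public enum ProductStatus
{
    Draft,
    Published,
    Discontinued
}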

Orchestrating the parts

So we have data coming in from multiple sources through our repositories, and we perform data mapping to our viewmodels through mapper classes. Maybe we also need to do some checking, validation or other extra work. If we need to combine data from different data sources in our viewmodel, we cannot do this in a repository class, since each repository class should depend on a single data source.

This can still result in quite complex code or unwanted dependencies in our controllers. For this I tend to create specialized service classes, in which I put this logic. Although there is no naming convention for them, I usually call them "ServiceClass" preceded by the model or controller type name (e.g. "ProductServiceClass"); a sketch follows below. References to (interfaces for) repositories are moved to these service classes, and a controller just gets a reference to a service class and calls a method on it to retrieve the viewmodel. All logic that is needed to create a controller's viewmodel and that transcends the scope of repositories, mappers or other classes is placed in the service class.

If the logic in a service class gets complex, design principles (SOLID, DRY) can require a more elaborate structure. In that case the service class may implement a façade pattern over other business logic classes in the system.

Using service classes also helps reduce code duplication in case multiple controllers use (part of) the same data and repositories. Complex logic that applies to a single model class is moved from the model class to the service class too, so we get clean models with little overhead or clutter.
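A sketch of such a service class for the product example, using the repository interface from earlier (at this point the repository is still instantiated in a hardcoded way; we will fix that further on):

public interface IProductServiceClass
{
    List<Product> GetList();
    Product Get(int id);
    bool Create(Product product);
    bool Update(Product product);
}

public class ProductServiceClass : IProductServiceClass
{
    // Repository reference, moved here from the controller
    IProductRepository repository = new SqlDbRepository();

    public List<Product> GetList()
    {
        // Combining, validating and mapping data would happen here;
        // with a single source this is a simple pass-through
        return repository.GetAll();
    }

    public Product Get(int id)
    {
        return repository.GetById(id);
    }

    public bool Create(Product product)
    {
        return repository.Create(product);
    }

    public bool Update(Product product)
    {
        return repository.Update(product);
    }
}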

The new controller code

By now we should have a controller with little code in each operation, like below:


public class ProductController : Controller
{
    IProductServiceClass service = new ProductServiceClass();

    public ActionResult Index()
    {
        List<Product> products = service.GetList();
        return View(products);
    }

    public ActionResult Products(int id)
    {
        if (id == 0)
        {
            return RedirectToAction("Index");
        }
        Product product = service.Get(id);
        return View(product);
    }

    [HttpPost]
    public ActionResult Create(Product product)
    {
        bool success = service.Create(product);
        // Here can go logic to deal with a failed creation
        return RedirectToAction("Index");
    }

    [HttpPost]
    public ActionResult Update(Product product)
    {
        bool success = service.Update(product);
        // Here can go logic to deal with a failed update
        return RedirectToAction("Index");
    }
}

As you can see, we now have a pretty clean controller which only contains code to deal with the interaction between view, user and the underlying system. Every controller will look like this, making it fast and easy to create new views and controllers. By using generics and deriving the service classes from generic interfaces, it is possible to make a base controller class containing the above logic and to create specific controllers by simply deriving from this base class with the specific types for viewmodel and service.
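A sketch of that idea, assuming a generic IServiceClass<TModel> interface (my own illustration) which ProductServiceClass would then implement:

public interface IServiceClass<TModel>
{
    List<TModel> GetList();
    TModel Get(int id);
    bool Create(TModel model);
    bool Update(TModel model);
}

public abstract class BaseController<TModel> : Controller
{
    protected IServiceClass<TModel> service;

    protected BaseController(IServiceClass<TModel> service)
    {
        this.service = service;
    }

    public ActionResult Index()
    {
        return View(service.GetList());
    }

    // The Products, Create and Update actions follow the same pattern as above
}

// A specific controller only has to supply the types
// (assuming ProductServiceClass implements IServiceClass<Product>):
public class ProductController : BaseController<Product>
{
    public ProductController() : base(new ProductServiceClass()) { }
}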

Using dependency injection to remove hardcoded dependencies

Although the above has made our code quite SOLID and DRY, we still have a hardcoded dependency on a service class, and through that on the underlying repositories and other classes. Controllers in ASP.NET MVC are by default instantiated by the framework on each request, and therefore we need a way to insert a dependency at runtime.

Over the last several years we have seen the rise of the so-called Inversion of Control (IoC) and Dependency Injection (DI) patterns in ASP.NET MVC applications. .NET Core has native libraries for this in the Microsoft.Extensions.DependencyInjection namespace. For standard ASP.NET MVC there are several third-party frameworks that implement these patterns, like Ninject, Autofac and Unity.

Basically these all work the same way. By adding one of these frameworks to our ASP.NET MVC application, we get the possibility to pass a dependency through a controller's constructor instead of making a hardcoded reference in the controller. So instead of:


public class ProductController : Controller
{
    IProductServiceClass service = new ProductServiceClass();

    …
}

We can do:


public class ProductController : Controller
{
    IProductServiceClass service;

    public ProductController(IProductServiceClass injectedService)
    {
        service = injectedService;
    }

    …
}

Since the controller no longer instantiates the service class itself, we can do the same in our service class: instead of creating repository instances in a hardcoded way, we can pass them in through the constructor of our service class by means of interface parameters. The DI framework takes care of passing on the concrete implementations and calling the right constructor.
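Applied to the service class sketched earlier, the hardcoded new SqlDbRepository() disappears as well:

public class ProductServiceClass : IProductServiceClass
{
    IProductRepository repository;

    // The DI framework resolves IProductRepository to whatever
    // implementation has been registered for it
    public ProductServiceClass(IProductRepository injectedRepository)
    {
        repository = injectedRepository;
    }

    // … operations as before, now using the injected repository
}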

At this point it is important to know about lifetime scopes in DI frameworks. Usually you can choose from three different implementation scopes: singleton, scoped and transient. Singleton means only one instance of an implementation is created, and it is passed to every injected parameter of the specified interface everywhere. The difference between transient and scoped can be made clear with the above example: in the case of transient, the service class will get its own instance of an injected parameter, while in the case of scoped it will get the same instance as the calling controller. This distinction is important if we need to share field values or state held by the injected objects.

Of course the DI framework needs to know in advance which implementation to link to which interface. This is done by registering them on application initialization and keeping them in a context called an IoC container. Generally the application's startup code calls a configuration method on a class like RegisterDependencies, which contains code like:


services.AddTransient<IOperationTransient, Operation>();

In a .NET Core application this configuration will typically be done within or called from ConfigureServices(..) in Startup.cs.
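For the classes in this article, such a RegisterDependencies method could look like this in the Microsoft.Extensions.DependencyInjection style (the chosen lifetimes are just an illustration):

public static class RegisterDependencies
{
    public static void Register(IServiceCollection services)
    {
        // A new instance every time one is requested:
        services.AddTransient<IProductServiceClass, ProductServiceClass>();

        // One instance per web request:
        services.AddScoped<IProductRepository, SqlDbRepository>();

        // One instance for the lifetime of the application, for example
        // a shared cache (hypothetical type, shown for completeness):
        // services.AddSingleton<ISharedCache, SharedCache>();
    }
}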

By using dependency injection we have removed the dependencies of classes and components on each other, making it much easier to swap out or change individual components of a software system. Quite often it is also possible to control the scope and lifetime of instances through the IoC container (e.g. operations like AddScoped and AddSingleton). The benefits for (unit) testing are clear as well: we can easily create mock or alternative implementations in a testing environment by implementing the parameter interfaces and passing them to the constructors.
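As a sketch, a unit test can construct the controller with a hand-rolled fake implementation (a mocking framework like Moq would work just as well; the MSTest attributes are one possible choice):

public class FakeProductService : IProductServiceClass
{
    public List<Product> GetList() { return new List<Product> { new Product() }; }
    public Product Get(int id) { return new Product(); }
    public bool Create(Product product) { return true; }
    public bool Update(Product product) { return true; }
}

[TestClass]
public class ProductControllerTests
{
    [TestMethod]
    public void Index_ReturnsView()
    {
        // No database or DI container needed: we inject the fake directly
        var controller = new ProductController(new FakeProductService());
        var result = controller.Index() as ViewResult;
        Assert.IsNotNull(result);
    }
}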

The new ASP.NET MVC application structure

So for our ASP.NET MVC application, after all of the above, the folder structure could look like this:


Project
  |- Constants
  |- Controllers
  |- Mappers
  |- Models
       |- ViewModels
       |- DataModels
  |- Repositories
  |- ServiceClasses
  |- Views

We will also have added a DI framework and a class like RegisterDependencies in our project root.

Of course the above is not the single perfect solution for every project, but in my opinion a modern ASP.NET MVC application is much more than just some models, controllers and views. Too often I see projects with code all over the place, quite often with random "helper" classes where developers ran into issues with overly complex code or redundancy. Hopefully this article helps.

 

Sources:
[i] https://www.dotnettricks.com/learn/mvc/a-brief-history-of-aspnet-mvc-framework
[ii] https://weblogs.asp.net/scottgu/asp-net-mvc-framework
[iii] https://en.wikipedia.org/wiki/SOLID
[iv] CRUD: Create, Retrieve, Update, Delete. A term derived from the four standard data manipulation operations on records in a database.
[v] https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/partial-classes-and-methods
