Managing parent and child collection relationships in Entity Framework – What is an identifying relationship anyway?

Some of the posts on this site are more of a reminder to myself about some problem I have encountered and the solution I found for it. This is one such post. The issue addressed below is fairly common and has been described and solved more comprehensively elsewhere, but I have added it here so I can find it again should the need arise. If it proves helpful to anyone else then all the better.

It is often desirable to have a parent entity that has a collection of child entities. For example, an Order entity may have a collection of OrderLine entities, or a User entity may have a collection of Role entities. These examples may look similar, but they have a subtle difference that can cause some behavioural issues in certain circumstances.

In the first example the order lines belong to an order and it doesn’t make sense for them to exist on their own. In the second example a Role can exist independently from the User.

This is the difference between an identifying and non-identifying relationship. Another example from Stack Overflow:

A book belongs to an owner, and an owner can own multiple books. But the book can also exist without the owner and it can change owner. The relationship between a book and an owner is a non-identifying relationship.

A book however is written by an author, and the author could have written multiple books. But the book needs to be written by an author; it cannot exist without an author. Therefore the relationship between the book and the author is an identifying relationship.

Whatever the type of relationship, it would be nice to be able to simply add and remove child items in the parent collection and let EF worry about all the bothersome adding to and removing from the database.

Adding to the collection works correctly in both cases, and if you have a non-identifying relationship (your child’s foreign key is nullable) you will also be fine when removing items from the collection.

If you have only implied an identifying relationship (your child’s primary key is the table’s Id field and its foreign key is non-nullable) you are likely to be in for a world of pain when trying to remove items from the child collection.

When you remove an item from a child collection with the above kind of relationship, Entity Framework tries to update the child entity’s foreign key to null. This causes problems as the foreign key is non-nullable, and you get this System.InvalidOperationException when the context saves:

The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.

The way to stop this from happening is to make the identifying relationship explicit by specifying the foreign key field of the child to be part of its primary key. I know that in the ORM world composite keys are not really in vogue, but this makes it explicit to Entity Framework that the child cannot exist without its parent, and so should be deleted when it is removed from the parent’s collection.

How do I specify a composite key?

To set the composite key, simply annotate the entity (or use the fluent API as part of OnModelCreating in the context). If you have an OrderLine entity that has an identifying relationship with an Order entity, the OrderLine entity should be annotated like this:

public class OrderLine
{
    [Key, Column(Order = 0), DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    public string Description { get; set; }

    public Decimal Cost { get; set; }

    [Key, ForeignKey("Order"), Column(Order = 1)]
    public int OrderId { get; set; }

    public virtual Order Order { get; set; }
}
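If you prefer to keep annotations off your entity classes, the same composite key can be declared with the fluent API instead. This is a sketch rather than a drop-in snippet: the context class name is invented for illustration, and it assumes the foreign key property on OrderLine is named OrderId.

```csharp
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

public class OrderContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<OrderLine> OrderLines { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Declare the composite key (Id + OrderId) so EF treats the
        // relationship as identifying and deletes orphaned order lines.
        modelBuilder.Entity<OrderLine>()
            .HasKey(line => new { line.Id, line.OrderId });

        // Keep the Id column database-generated, as in the annotated version.
        modelBuilder.Entity<OrderLine>()
            .Property(line => line.Id)
            .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);

        base.OnModelCreating(modelBuilder);
    }
}
```

Either approach tells EF the same thing; use whichever your project already favours.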

Behavioural testing in .Net with SpecFlow and Selenium (Part 1)

In this series of posts I am going to run through using Selenium and SpecFlow to test .Net applications via the user interface. I will talk about the situations in which this type of testing can be used, and how we can create effective, non-brittle tests with the help of the Page Object pattern.


I have consciously avoided using the term Behaviour Driven Development (BDD) in the title as this post isn’t really about BDD as such. It is about using some tools in a way that could be applied in many situations, with BDD being just one of them. I want to discuss how to use the BDD framework SpecFlow and the web automation framework Selenium to create tests that exercise an application via its web user interface.

When to use this approach

You could use this approach in several situations. In each case the idea is to create an executable specification and automated tests for your code base. The two most common situations are:

  1. Automated acceptance tests – let’s call this approach ‘test afterwards’. There is often a desire to create acceptance tests (or integration tests or smoke tests or whatever…) for an application. Using SpecFlow and Selenium you can create tests that exercise the application as a user would actually interact with it. A suite of acceptance tests is a worthwhile addition to any application.
  2. Behaviour Driven Development style – This is the ‘test before’ approach. In Test Driven Development with MVC I talked about how a true BDD style would ideally test user interaction with the application. These tools allow you to do just that. The workflow would be very similar to the outside-in approach I describe in that post, only you would create scenarios in SpecFlow instead of writing unit tests. Of course there are tons of approaches when it comes to BDD, this being just one of them. Liz Keogh has a useful summary and introduction to BDD that is worth a look if you want to know more about it.

In this post I am going to use a test afterwards approach so that I can focus on the tooling.

Installing SpecFlow and Selenium

I have created a new folder to house my acceptance tests. Installing Selenium is easy: simply grab the latest version from NuGet (you are using NuGet, right?). Make sure to select Selenium WebDriver. For SpecFlow, grab the latest version from NuGet and then get the Visual Studio integration from the Visual Studio Gallery. This will add some extra VS tooling we will use later.

The first scenario

I am going to revisit the event search scenarios from Test Driven Development with MVC and expand on them a little. The first thing is to add a SpecFlow feature file to the project. Right click on the project, select Add New Item… and add a new SpecFlow feature file. I have called mine EventSearch.feature. The feature file has a feature, which is essentially a user story, and a number of scenarios in the Given-When-Then format.

The feature files are written in the Gherkin language, which is intended to be a human readable DSL for describing business behaviour.

Let’s start populating the feature file with the first scenario.

Feature: EventSearch
	In order to find an event to participate in
	As a potential event delegate
	I want to be able to search for an event

Scenario: Event Search Criteria should be displayed
	Given that I want to search for an event by region
	When I select to choose a region
	Then a list of possible regions is presented
	And the list should contain "All", "North", "South" and "London"
	And "All" should be the default value

The next step is to create the bindings that allow us to wire up the Gherkin feature to our application’s code. Before we do that we just need to tell SpecFlow to use Visual Studio’s testing framework MSTest instead of the default NUnit. This is done by adding a line of configuration to your app.config; you will find the specFlow section already exists:

<specFlow>
    <!-- For additional details on SpecFlow configuration options see http://go.specflow.org/doc-config -->
    <unitTestProvider name="MsTest.2010"/>
</specFlow>

As a brief aside, you will notice that the feature file has a code-behind .cs file that has been created by SpecFlow. If you have a look inside you will see that the code is a rather awkward looking MSTest test class. It is used to run the scenarios that are described in the feature file and implemented in the binding step definitions we are about to create. It is not pretty, but it is not intended to be read or edited by a user as it is regenerated by SpecFlow whenever the feature file is saved. SpecFlow will generate the correct type of testing hooks for the test framework you are using.

Create the step definitions

The step definitions are where we add code to perform the tests. The step definitions are defined in a class with a [Binding] attribute. When we run the feature, the test runner runs the tests defined in the feature code-behind, which is bound to these steps by SpecFlow. The easiest way to create these bindings is to use SpecFlow’s Visual Studio tooling. Right click on the scenario in the feature file and select “Generate step definitions”. This brings up a dialogue box that allows you to create the necessary bindings to wire up the test. I created the bindings in a new class called EventSearchSteps. The generated bindings look like this:

using System;
using TechTalk.SpecFlow;

namespace CrazyEvents.AcceptanceTests
{
    [Binding]
    public class EventSearchSteps
    {
        [Given(@"that I want to search for an event by region")]
        public void GivenThatIWantToSearchForAnEventByRegion()
        {
            ScenarioContext.Current.Pending();
        }

        [When(@"a list of possible regions is presented")]
        public void ThenAListOfPossibleRegionsIsPresented()
        {
            ScenarioContext.Current.Pending();
        }

        [Then(@"the list should contain ""(.*)"", ""(.*)"", ""(.*)"" and ""(.*)""")]
        public void ThenTheListShouldContainAnd(string p0, string p1, string p2, string p3)
        {
            ScenarioContext.Current.Pending();
        }

        [Then(@"""(.*)"" should be the default value")]
        public void ThenShouldBeTheDefaultValue(string p0)
        {
            ScenarioContext.Current.Pending();
        }
    }
}

The step definitions have been created with a pending implementation ready for us to complete. You will see that some parts of the features have generated steps that contain attributes with what look like regular expressions, and that the methods have parameters. This is a feature of SpecFlow that allows for reuse of step definitions when the only thing that changes is a string value. We will see how this is used later. SpecFlow has lots of useful features like this; I would advise checking out the documentation to learn more about it.

Let’s run the test and see what happens.

ReSharper’s test runner shows that the test was inconclusive. This is not really a surprise, as the test implementation is pending. Of course you can use Visual Studio’s built-in test runner instead.

In the next post we will have a look at how we can implement the bindings with Selenium to automate browser interactions.


RESTful Web Services with Windows Communication Foundation

In this post I am going to investigate how it is possible to create RESTful web services with Windows Communication Foundation. As well as discussing how the services are created I am also going to dig a bit deeper into some of the issues that can arise with this type of approach, especially the Same Origin Policy for browser based service execution, and how we can work around it.

What is REST?

I don’t really want to go too deep into REpresentational State Transfer, as there are lots of excellent resources already written that will do a far better job of explaining it, however a brief recap of the basics can’t hurt.

REST is an architectural pattern for accessing resources via HTTP. A resource can represent any type of item a client application may wish to access. A client can be a user or could be a machine calling a web service to get some data.

If you were creating a traditional RPC style service to return the details of a product you may well have created something that is called like this:

http://superproducts.com/products/getProduct?id=12345

In this example we have a verb (get) and a noun (Product). A RESTful version of this service would be something along the lines of:

http://superproducts.com/products/12345

In this example there is no explicit verb usage. Instead it is the HTTP verb GET that is used. In a RESTful web service it is the HTTP verbs (GET, POST, PUT, DELETE) that are used to interact with the resource.

GET – Used to return a resource. It may be a single product, as in the example above, or it could be a collection of products.
POST – Used to create a new resource.
PUT – Used to update a resource. Sometimes PUT is also used to create a new resource if it is not present to update.
DELETE – Used to delete a resource.

The operations of a RESTful web service should be idempotent where appropriate to build resilience into the operations. It also makes sense that some verbs are not applicable to certain resources, depending on whether the resource is a collection of resources or a single resource (as we will see later).

REST is not a new idea – it was introduced by Roy Fielding in his 2000 doctoral dissertation, but has only recently been gaining serious traction against more widespread web service styles such as SOAP-based XML. REST’s lightweight style and potential for smaller payloads have seen its popularity increase, and major vendors, including Microsoft, have sought to add support for REST services into their products.

A RESTful web service adheres to the principles of REST, as well as specifying the MIME type of the data payload and the operations supported. REST does not specify the type of payload to be used; it could really be any data format. In practice JSON has emerged as the de facto standard for RESTful services due to its lightweight nature.

Creating a WCF REST service

The WCF service will provide product information for some generic widgets.

The RESTful api for the service will support the following operations:

/Products

GET – Return a collection of products
POST – Add a new product to the product collection
PUT – Not supported, as we would want to edit the contents of the collection of products, not the collection itself.
DELETE – Not supported, as it would not really make sense to delete an entire collection of products

/Products/{productId} e.g. /products/12345

GET – Get the product represented by the product Id
POST – Not supported, as we cannot add anything at an individual product level
PUT – Update the product details.
DELETE – Delete the product

I am going to move fairly quickly when it comes to creating the service. If you are not familiar with WCF (or need a refresher) I recommend that you have a look at the earlier series on creating WCF services.

I am going to make use of some of the new .Net 4.0 WCF features and not use a .svc file for the service. When creating RESTful services this has the advantage of making the URL look a little more ‘RESTful’, so instead of http://website/products.svc/1 we can have http://website/products/1.

Let’s create a new WCF application solution. I have thrown away the .svc file that the wizard created for us. Rename the interface to something sensible. I have called mine ProductService. Remember that this internal naming of the service contract and its operations is not exposed once the service is up and running. It makes sense to name the contract and operations in a sensible way for the project and then map them to more REST-friendly URLs. We will see how to do this later in the post.

A RESTful service is in many respects similar to any other. It still follows the ABC of WCF. The service has an Address at which the operations are invoked, it has a Binding that specifies the way the service will communicate, and it has a Contract that defines the operations. The Contract for the ProductService looks like this:

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    [WebInvoke(Method = "GET", UriTemplate = "", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    List<Product> GetProducts();

    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "", RequestFormat = WebMessageFormat.Json)]
    void CreateProduct(Product product);

    [OperationContract]
    [WebInvoke(Method = "GET", UriTemplate = "/{ProductId}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    Product GetProductById(string productId);

    [OperationContract]
    [WebInvoke(Method = "PUT", UriTemplate = "/{ProductId}", RequestFormat = WebMessageFormat.Json)]
    Product UpdateProductDetails(string productId);

    [OperationContract]
    [WebInvoke(Method = "DELETE", UriTemplate = "/{ProductId}", RequestFormat = WebMessageFormat.Json)]
    Product DeleteProduct(string productId);
}

It is the WebInvoke attribute that identifies that a service operation is available via REST. The Method property defines the HTTP verb that is to be used. The UriTemplate property allows a templated URL to be specified for the operation; parameters for the URL are specified between {}. The RequestFormat and ResponseFormat properties specify the data format for the exchange. We are using JSON.

As we are not using .svc files we must supply some routing so WCF knows how to route the requests to the service. Open up the Global.asax (or create one if the project doesn’t already have one) and locate the Application_Start method and add in a new route.

protected void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.Add(new ServiceRoute("Products", new WebServiceHostFactory(), typeof(ProductService)));
}

This route tells WCF that when a request comes in for Products it is routed to the ProductService. As an aside, if you were using a custom service host factory, for example if you were using an IoC container to inject dependencies, you could specify it here instead of the standard WebServiceHostFactory.

It is the URI template and the routing that allows us to provide the RESTful web service addresses. Although we have a service operation called GetProductById that takes a Product Id, the resource is accessed via a URL in the form of /Products/12345.

Now the contract and routing are out of the way we are ready to go. The actual implementation of the service operations is not very interesting, as it has no bearing on the RESTful aspects of the service; it is just the same as any implementation of a WCF service contract interface.
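For completeness, here is a minimal sketch of what an implementation might look like, backed by a simple in-memory dictionary so there is something to run against. The shape of the Product class is my own guess (the post does not show it); any real implementation would of course talk to a proper data store.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductService : IProductService
{
    // An in-memory store stands in for a real data source.
    private static readonly Dictionary<string, Product> products =
        new Dictionary<string, Product>
        {
            { "12345", new Product { Id = "12345", Name = "Super Widget", Price = 9.99m } }
        };

    public List<Product> GetProducts()
    {
        return products.Values.ToList();
    }

    public void CreateProduct(Product product)
    {
        products[product.Id] = product;
    }

    public Product GetProductById(string productId)
    {
        Product product;
        products.TryGetValue(productId, out product);
        return product;
    }

    public Product UpdateProductDetails(string productId)
    {
        // The contract only takes the product id; a fuller version would
        // accept the updated product details in the request body as well.
        Product product;
        products.TryGetValue(productId, out product);
        return product;
    }

    public Product DeleteProduct(string productId)
    {
        Product product;
        if (products.TryGetValue(productId, out product))
        {
            products.Remove(productId);
        }
        return product;
    }
}
```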

“But wait”, I hear you all shout, “you haven’t configured the service in the Web.config!” You are correct. In this example I am going to use another feature of .Net 4.0 WCF: default configuration. This allows WCF to automatically create default configuration for bindings, endpoints, behaviors and protocol mappings without the need to get your hands dirty in the Web.config. Of course, in a production environment it is likely that you would want more control over your configuration, and this can be added to the config as required.

Let’s try the service

In order to test the service we need to call it, so a simple test harness is required. I created a simple HTML page using jQuery to call out to the service:

<!DOCTYPE html>
<html>
<head>
    <title>WCF RESTful Web Service Example</title>
</head>
<body>
    <h1>
        WCF RESTful Web Service Example
    </h1>
    <h2>
        GET Response
    </h2>
    <pre id="getProductsResponse"></pre>

    <script type="text/javascript" src="http://code.jquery.com/jquery-1.7.2.min.js"></script>
    <script type="text/javascript">

        var GetProductRequest = function () {
            $.ajax({
                type: "GET",
                url: "http://localhost:52845/products/",
                contentType: "application/json",
                dataType: "text",
                success: function (response) {
                    $("#getProductsResponse").text(response);
                    alert(response);
                },
                error: function (error) {
                    console.log("ERROR:", error);
                }
            });
        };

        GetProductRequest();

    </script>
</body>
</html>

This simply uses jQuery’s ajax functionality to make a call to the service and output the response on the page. Let’s run it. As another aside, if you are using the ASP.NET Development Server when building and testing your services (and the chances are you probably are) then you will only be able to test the GET and POST methods, as the others are not allowed and will return a 405 error. If you want to be able to test the PUT and DELETE methods I would recommend using IIS Express as your development web server.

I am using Chrome 21 and there appears to be a problem. If we look at the developer tools in Chrome (press F12) we can see that a request was made with the OPTIONS method, and it returned a status of 405 Method Not Allowed. There is also a message about the Origin not being allowed by something called Access-Control-Allow-Origin. This is due to the Same origin policy.

The Same origin policy

The Same origin policy is a browser security concept that prevents a client-side web application running from one site (origin) from obtaining data retrieved from another site. This pertains mainly to cross-site script requests. For two requests to be considered to be from the same origin they must have the same domain name, application layer protocol and port number.

Although the same origin policy is an important security concept with a lot of valid uses, it does not help our legitimate situation. We need to be able to access the RESTful web service from a different origin.

Cross-origin resource sharing

Cross-origin resource sharing is a way of working with the restrictions that the same origin policy applies. In brief, the browser can use a preflight request (this is the OPTIONS method request we have already seen) to find out whether a resource is willing to accept a cross-origin request from a given origin. The requests and responses are validated to stop man-in-the-middle type attacks. If everything is okay the original request can be performed. Of course there is more to it than that, but that is all we need to know for now. More information is available in the W3C spec for CORS.

As usual with browsers, CORS support is not universal, nor is it implemented consistently across all browsers. In some browsers it is all but ignored (I am looking at you, IE 8/9). We will need to ensure that the CORS support we add to the WCF service works in all situations. ‘When can I use Cross-Origin Resource Sharing’ is a handy tool to see if your browser supports CORS.

Dealing with an OPTIONS preflight request in WCF

There are a couple of ways of dealing with CORS in WCF.

The first is some custom WCF behaviours that can be applied to a service operation via an attribute. This approach has the advantage that it is possible to explicitly target which operations you want to accept cross-domain requests, but it has quite an involved implementation. A good overview can be found here.

I am going to focus on the second method, which is to intercept the incoming requests, check for the presence of the OPTIONS method and respond appropriately. I will show the code and then we can talk about how it works. Back to the Global.asax and the Application_BeginRequest method.

protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", "*");
    if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
    {
        HttpContext.Current.Response.AddHeader("Cache-Control", "no-cache");
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Headers", "Content-Type, Accept");
        HttpContext.Current.Response.AddHeader("Access-Control-Max-Age", "1728000");
        HttpContext.Current.Response.End();
    }
}

As the method name suggests, this is called when a request is received. The first action is to add an Access-Control-Allow-Origin header to the response. This is used to specify the URLs that can access the resource. If you are feeling generous and want any site (or origin) to access the resource you can use the * wildcard. The reason this header is outside of the check for the OPTIONS method is that browsers sometimes do not send an OPTIONS preflight request for ‘simple’ requests (GET and POST). In this case they just send their origin and expect the response to tell them whether they can access the resource. This is dependent upon the browser’s CORS implementation. Remember when we first tried to call the service and we got an error saying that the Origin was not allowed by Access-Control-Allow-Origin? That was because the response did not contain this header with the correct entry.

The Cache-Control header tells the requester not to cache the resource. Access-Control-Allow-Methods specifies which HTTP methods are allowed when accessing the resource. Access-Control-Allow-Headers specifies which headers can be used to make a request. Access-Control-Max-Age specifies how long the results of the preflight response can be cached in the preflight result cache.

There is a lot more to the CORS implementation. If you are looking for more information, two of the best places are the W3C CORS Specification and the Mozilla Developer Network.

Let’s give it another try.

Success. That’s all there is to it. Hopefully you will see that creating RESTful web services with WCF is easy. If you have any questions please feel free to leave a comment.

Filtering a CAML Query by Target Audience


This post is not really my usual area of expertise, but hopefully it might be useful to someone who is trying to do something along the same lines.

Recently I had the opportunity to work on a Silverlight control to be displayed on a SharePoint Intranet site. The control was pulling data from a SharePoint list via a Collaborative Application Markup Language (CAML) query and then filtering the results to select only the items with the required Target Audiences. This approach had some performance problems as it was necessary to return a large number of results to ensure that the required number of filtered results was contained within the result set.

So, for example, it would have to return 100 results to filter down to the 10 results for the required target audience. Needless to say, this is not a great idea. A better approach would be to filter the list on the server as part of the CAML query and then return only the results we were interested in.

So, just how would you go about adding some audience targeting filters to a CAML query, where there could be one or more possible target audiences?

The query would effectively become (in SQL like syntax):

SELECT * FROM LIST WHERE TargetAudiences IN (AudienceList)

However, SharePoint lists don’t really allow this type of query. For an item in a list, the audiences are a string of audience GUIDs separated by either commas or semi-colons (this article explains why this is the case), so the IN bit cannot be used. The query becomes something like:

SELECT * FROM LIST WHERE TargetAudiences LIKE %AudienceGuid1% OR TargetAudiences LIKE %AudienceGuid2% OR TargetAudiences LIKE %AudienceGuid3% ... etc

CAML does not have a LIKE, but it does have a Contains, which is close enough. So all I have to do is build up a valid query string, and do it in such a way that it works for 1 audience, for 20 audiences or for however many.

I am just going to come out and say it: the CAML logical join (AND / OR) operators are weird. The operators have to be applied in pairs (maybe in homage to Noah’s Ark?).

A single OR with 2 conditions:

<Or>
    Condition1
    Condition2
</Or>

3 conditions:

<Or>
    <Or>
        Condition1
        Condition2
    </Or>
    Condition3			
</Or>

This can lead to some spectacularly ugly nested conditions that are both hard to write and hard to read. You know a language’s syntax is bad when there are lots of query building and converting tools available to give you a fighting chance of using it correctly.

I needed to generate a query that could have an arbitrary number of Contains statements OR’d together from a list of audience GUIDs provided in a List (it doesn’t really matter where the List came from).

A contains query looks like this:

<Contains>
    <FieldRef Name='Target_x0020_Audiences'/>
    <Value Type='TargetTo'>11111111-1111-1111-1111-111111111111</Value>
</Contains>

A Contains query like this is not ideal, as its performance is poor due to the string matching it has to do. I am not sure that there is any other way to do this that would perform better; if there is, let me know in the comments, I would love to know.

If we wanted to just look for a single audience, the query above would be fine, but I needed to look for lots of audiences and get the results that matched at least one of them. To create the query, a small utility class was in order. The basic idea was to create the Contains syntax for each Guid and push it onto a queue, then take the top 2 items and OR them together before pushing the result back on the queue. Repeat this until there was a single item in the queue, which is the query that we want.

So, if A, B C etc. represent a Contains condition, and the () represent the OR tags, the process for five audiences looks like this:

----> Head
E D C B A --> (AB) E D C --> (CD) (AB) E --> ((AB)E) (CD) --> ((CD)((AB)E))

The code from the utility class:

public static string CreateTargetAudienceCriteria(List<string> audienceList)
{
    var audienceQueue = new Queue<string>();

    // Create a Contains condition for each audience and add it to the queue.
    audienceList.ForEach(audience => audienceQueue.Enqueue(BuildContains(audience)));

    // Repeatedly OR together the front two items until a single query remains.
    while (audienceQueue.Count > 1)
    {
        audienceQueue.Enqueue(String.Format("{0}{1}{2}{3}", "<Or>\r\n", audienceQueue.Dequeue(), audienceQueue.Dequeue(), "</Or>\r\n"));
    }

    return audienceQueue.Dequeue();
}

private static string BuildContains(string audience)
{
    return String.Format("<Contains>\r\n<FieldRef Name='Target_x0020_Audiences'/><Value Type='TargetTo'>{0}</Value>\r\n</Contains>\r\n", audience);
}

It would then simply be used within a CAML query as shown below; it is likely that your query would include more in the Where clause:

var camlQuery = new CamlQuery
{
	ViewXml = string.Format(
		@"<View>  
			<Query> 
				<Where>
					{0}                        
				 </Where>
			</Query> 
		</View>",
	TargetAudience.CreateTargetAudienceCriteria(audienceList))
};

Test Driven Development with ASP.Net MVC (Part 5)


In Part 4 of the series we implemented the first controller action for the Event Controller in a Test Driven manner. As we are working from the outside-in, although the Controller action has been implemented, the layers beneath the Controller (the service layer and the repository in our case) are not yet implemented beyond creating anything needed by the layers above.

As the controller needed to call the service, a skeleton implementation was created in part 4, but it is far from complete.

public class EventService : IEventService
{   
    public GetAllRegionsResponse GetAllRegions()
    {
        throw new NotImplementedException();
    }
}

The next step in the outside-in process is to build up the implementation of the service method guided by tests. The functionality for this service method is really simple, so it shouldn’t take long to get it up and running. First we should create a new test class to hold our service tests. I have separated the service tests into a separate namespace to aid navigation and readability. I have also named the test class in such a way that, when read in conjunction with the test name, it gives a meaningful description of the intent of the test.

So what is the intent of the test for the GetAllRegions() service method? We would expect the service method to get all the regions so that they can be presented to the user by the Event Controller. In this example we are going to have a repository for data persistence. So this method is simply going to ask the repository for a list of regions, and then package them up to be returned to the Controller. Let’s write the test.

public class EventServiceShould
{
    private EventService eventService;
    private Mock<IEventRepository> eventRepository;

    [TestMethod]
    public void ReturnAListOfAllTheRegions()
    {
        //Arrange
        eventRepository = new Mock<IEventRepository>();
        eventService = new EventService(eventRepository.Object);

        var regions = new List<Region>
                            {
                                new Region {Id = 1, Name = "North"},
                                new Region {Id = 2, Name = "South"},
                                new Region {Id = 3, Name = "London"}
                            };

        eventRepository.Setup(x => x.GetAllRegions()).Returns(regions);

        //Act
        var result = eventService.GetAllRegions();

        //Assert
        Assert.AreEqual(3,result.Regions.Count);
        Assert.AreEqual("South", result.Regions[1].Name);
    }
}

As before, Resharper is telling us that there is a lot of implementation missing. Let’s start with the repository interface. I have created another project for the repositories:

namespace CrazyEvents.Repositories
{
    public interface IEventRepository
    {
        List<Region> GetAllRegions();
    }
}

The Repository should always work with domain entities, and ours is no exception. Next up is the Region domain entity. Again I have created a new project to hold the domain items.

namespace CrazyEvents.Domain
{
    public class Region
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
}

And finally the service method implementation. This method has no logic, as we simply want to return all the regions. The only thing we need to do is to take the regions returned from the repository and build the DTOs for the Controller.

public class EventService : IEventService
{
    private readonly IEventRepository eventRepositoty;

    public EventService(IEventRepository eventRepositoty)
    {
        this.eventRepositoty = eventRepositoty;
    }

    public GetAllRegionsResponse GetAllRegions()
    {
        var regions = eventRepositoty.GetAllRegions();

        var regionDtos = regions.Select(region => new RegionDto {Id = region.Id, Name = region.Name}).ToList();

        return new GetAllRegionsResponse {Regions = regionDtos};
    }
}
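The GetAllRegionsResponse and RegionDto types used above are not shown in this post; assuming they are plain DTOs, a minimal sketch might look like this:

```csharp
using System.Collections.Generic;

// Flat data-transfer shape of a Region for the controller layer.
public class RegionDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Response wrapper returned by the service method.
public class GetAllRegionsResponse
{
    public List<RegionDto> Regions { get; set; }
}
```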

Now the implementation is complete, let’s run the tests and make sure everything is okay.

All the tests are green which means that everything is going well. The implementation for the scenario is almost complete. Working from the outside in means that the only thing left to do at this point is the repository implementation.

As the repository will need to perform the data persistence (usually by talking to a database) it is not possible to isolate the implementation for unit testing in the same way as the controller or the service and is not generally part of an outside-in TDD approach. That is not to say it should not be tested. It would be prudent to create an integration test for the scenario including talking to a real database. Better yet, would be to use the scenario to create a BDD style test with a framework such as SpecFlow.
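For illustration, such an integration test might look something like the following. This is a sketch only: it assumes a concrete EventRepository implementation and a test database seeded with the three known regions, neither of which is shown in this post.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Illustrative integration test: exercises the real repository against
// a test database rather than a mock.
[TestClass]
public class EventRepositoryIntegrationShould
{
    [TestMethod]
    public void ReturnTheSeededRegions()
    {
        // Assumed: EventRepository reads its connection string from config
        // and the test database has been seeded with three regions.
        var repository = new EventRepository();

        var regions = repository.GetAllRegions();

        Assert.AreEqual(3, regions.Count);
    }
}
```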

Once the repository method is complete, it is time to move onto the next slice of functionality and follow the same outside-in process.

The series in full:

Windows Communication Foundation – Resolving MVC client WCF ChannelFactory dependencies with Unity

This series of posts will talk about Windows Communication Foundation, starting with an introduction to the technology and how to use it in your project, moving onto more advanced topics such as using an IOC container to satisfy dependencies in your services.
Posts in this series:

In the previous post we looked at how it was possible to use ChannelFactory to consume a WCF service from an MVC controller without using a proxy generated from the service WSDL.

The View action of the ProductController looked like this:

public ActionResult View(int productId)
{
    var factory = new ChannelFactory<IProductService>("WSHttpBinding_IProductService");
    var wcfClientChannel = factory.CreateChannel();
    var result = wcfClientChannel.GetProductById(productId);

    var model = new ViewProductModel 
        { 
            Id = result.Id, 
            Name = result.Name, 
            Description = result.Description 
        };

    return View(model);
}

This approach is less than ideal. The ChannelFactory creation is a little awkward and would be littered throughout the controller code, creating duplication.

It would be better if we could just use Unity to inject a dependency on the WCF service in the Controller constructor like this:

private readonly IProductService productService;

public ProductController(IProductService productService)
{
    this.productService = productService;
}

public ActionResult View(int productId)
{
    var result = productService.GetProductById(productId);

    var model = new ViewProductModel 
        { 
            Id = result.Id, 
            Name = result.Name, 
            Description = result.Description 
        };

    return View(model);
}

If we tried to run this, MVC would not be able to create the controller as there is no longer a parameterless constructor. Since version 3, MVC has provided a mechanism for using IoC dependency resolution to resolve controller dependencies through IDependencyResolver: simply implement the interface and register your implementation using DependencyResolver.SetResolver.

If your needs are simple, then there are very simple implementations that will serve you well. However, if you want to resolve dependencies that implement IDisposable you need to be a bit more careful, as simple implementations often do not cater for IDisposable at all.
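To illustrate the point, a naive Unity-backed implementation might look like the sketch below. Note that it makes no attempt to track or dispose resolved instances, which is exactly the limitation just mentioned:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using Microsoft.Practices.Unity;

// Naive IDependencyResolver over Unity. Fine for simple cases, but
// resolved IDisposable instances are never disposed.
public class NaiveUnityDependencyResolver : IDependencyResolver
{
    private readonly IUnityContainer container;

    public NaiveUnityDependencyResolver(IUnityContainer container)
    {
        this.container = container;
    }

    public object GetService(Type serviceType)
    {
        try
        {
            return container.Resolve(serviceType);
        }
        catch (ResolutionFailedException)
        {
            // MVC expects null for types it should create itself.
            return null;
        }
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        try
        {
            return container.ResolveAll(serviceType);
        }
        catch (ResolutionFailedException)
        {
            return Enumerable.Empty<object>();
        }
    }
}
```

It would be registered with `DependencyResolver.SetResolver(new NaiveUnityDependencyResolver(container));`.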

I am going to recommend a NuGet package to save us from implementing IDependencyResolver ourselves, as I want to focus on how we can resolve WCF service dependencies. Unity.MVC3 is a nice little package that makes setting up Unity with MVC very easy. Although it was built for MVC3, MVC4 uses the same mechanism, so the package continues to work correctly. You can read more about how the package functions here.

Install the package and open Bootstrapper.cs. Inside there is a BuildUnityContainer() method; this is where we need to register the dependencies with Unity. But when we ask for an IProductService, what should we return? We need a new WCF channel for the IProductService based on the endpoint settings we defined in the web.config in the previous post. The code to do this is:

private static IUnityContainer BuildUnityContainer()
{
    var container = new UnityContainer();

    container.RegisterType<IProductService>(
        new ContainerControlledLifetimeManager(),
        new InjectionFactory(
            (c) => new ChannelFactory<IProductService>("WSHttpBinding_IProductService").CreateChannel()));          

    return container;
}

So what does this actually do? The first parameter is the type of lifetime manager we require. A Unity lifetime manager describes how dependencies are created and destroyed and the lifetime those dependencies have. In this case a ContainerControlledLifetimeManager means that the same instance is returned for each call (also known as a singleton instance). This is desirable for creating WCF channels, as channel creation is a comparatively expensive operation, so we only want to create the channel once for the lifetime of the Unity container. There are several other Unity lifetime managers that are useful in different circumstances.
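For comparison, swapping the lifetime manager changes the instancing behaviour. The sketch below is illustrative only:

```csharp
using System.ServiceModel;
using Microsoft.Practices.Unity;

var container = new UnityContainer();
var factory = new InjectionFactory(
    c => new ChannelFactory<IProductService>("WSHttpBinding_IProductService").CreateChannel());

// Singleton: the same channel instance is returned for every Resolve call.
container.RegisterType<IProductService>(new ContainerControlledLifetimeManager(), factory);

// Transient (Unity's default): a new channel would be created per Resolve.
// container.RegisterType<IProductService>(new TransientLifetimeManager(), factory);
```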

The next parameter introduces a factory function that tells Unity how to create the required concrete type. In our case we are going to create a new WCF Channel using ChannelFactory for the named client endpoint we set up in the web.config.

The only thing we need to do is to call Bootstrapper.Initialise() from Application_Start() in Global.asax and we are good to go. The controller is now completely decoupled from the WCF ChannelFactory, leaving a much cleaner implementation.
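In Global.asax that call looks something like this; the exact contents of Application_Start will vary with your MVC project template:

```csharp
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    // Build the Unity container and wire up the MVC dependency resolver.
    Bootstrapper.Initialise();
}
```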

Posts in this series:

Windows Communication Foundation – Resolving WCF service dependencies with Unity

This series of posts will talk about Windows Communication Foundation, starting with an introduction to the technology and how to use it in your project, moving onto more advanced topics such as using an IOC container to satisfy dependencies in your services.
Posts in this series:

In this post I am going to use the Microsoft Unity Inversion of Control container to resolve the dependencies for the ProductService implementation.

Most developers will already be familiar with the concept of Dependency Injection and will probably have used an IOC container such as Unity or Castle.Windsor at some point. In order to use Unity to resolve the dependencies in a service implementation, a few steps must be taken to wire up Unity with WCF. This post will walk through those steps.

Adding Unity and creating the DI Wrapper

I have created a new project to hold all the DI related stuff. You may wish to put it somewhere else, which is fine. Into that project I have added Unity via NuGet. The first thing to do is to create a static wrapper for the Unity container so we can be sure we are getting the same Unity container whenever we need to use it.

using Microsoft.Practices.Unity;

namespace WcfServiceApplication.DependencyInjection
{
    public static class DIWrapper
    {
        private static readonly IUnityContainer _container;

        static DIWrapper()
        {
           _container = new UnityContainer();   
        }

        public static IUnityContainer Container
        {
            get { return _container; }
        }
    }
}

The DIWrapper provides a static getter returning the UnityContainer. As it is a static class, the constructor is guaranteed to run only once, ensuring that only one UnityContainer is ever created. This same container will be returned whenever a call to DIWrapper.Container is made.
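For example, registrations and resolutions made anywhere in the host all operate on that single shared container:

```csharp
// Register a mapping and resolve it later; both calls hit the same
// container instance behind DIWrapper.Container.
DIWrapper.Container.RegisterType<IProductRepository, ProductRepository>();

var repository = DIWrapper.Container.Resolve<IProductRepository>();
```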

UnityServiceHostFactory

A service host does exactly as the name would suggest. A service such as the current ProductService can be hosted on the standard service host without any issues. Once we want to use Unity (or any other IOC container) we need to extend the ServiceHost functionality in order to resolve the dependencies in the service implementation. WCF uses a factory pattern to provide a layer of indirection between the hosting environment and the concrete type of the service required. If you don’t specify which ServiceHostFactory to use, you will get the default factory, which returns an instance of the default ServiceHost.

To specify a ServiceHostFactory for a service it needs to be specified in the service .svc file like so:

<%@ ServiceHost 
    Language="C#" Debug="true" 
    Service="WcfServiceApplication.Implementation.ProductService" 
    Factory="WcfServiceApplication.DependencyInjection.WCF.UnityServiceHostFactory"
%>

If you do not specify a factory in the .svc file you will encounter the following error when trying to use the service.

“InvalidOperationException: The service type provided could not be loaded as a service because it does not have a default (parameter-less) constructor. To fix the problem, add a default constructor to the type, or pass an instance of the type to the host.”

This is because the default behaviour for WCF is to call the parameterless constructor, but if you are using constructor injection for the dependencies the service implementation will not have one, so WCF does not know how to instantiate the service.

Obviously this factory does not exist so we had better create it. To do so we extend System.ServiceModel.Activation.ServiceHostFactory.

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;

namespace WcfServiceApplication.DependencyInjection.WCF
{
    public class UnityServiceHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            return new UnityServiceHost(serviceType, baseAddresses);
        }
    }
}

All the factory needs to do is return a new instance of a ServiceHost, so we just override CreateServiceHost to return our yet-to-be-created UnityServiceHost.

UnityServiceHost

The new ServiceHost functionality is again very straightforward. System.ServiceModel.ServiceHost is extended and a single method is overridden.

using System;
using System.ServiceModel;
using Microsoft.Practices.Unity;

namespace WcfServiceApplication.DependencyInjection.WCF
{
    public class UnityServiceHost : ServiceHost
    {
        public UnityServiceHost(Type serviceType, Uri[] baseAddresses)
            : base(serviceType, baseAddresses) { }

        public IUnityContainer Container
        {
            get { return DIWrapper.Container; }
        }

        protected override void OnOpening()
        {
            new UnityServiceBehavior(Container).AddToHost(this);
            base.OnOpening();
        }
    }
}

Inside the overridden OnOpening method a new UnityServiceBehavior is created (which we will create next) and added to the UnityServiceHost before the OnOpening method on the base ServiceHost is called.

UnityServiceBehavior

The System.ServiceModel.Description.IServiceBehavior interface provides a mechanism to extend or modify some aspect of an entire service. To use Unity with WCF we need to add some custom endpoint behaviour.

using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using Microsoft.Practices.Unity;

namespace WcfServiceApplication.DependencyInjection.WCF
{
    public class UnityServiceBehavior : IServiceBehavior
    {
        private ServiceHost serviceHost;

        public UnityServiceBehavior()
        {
            InstanceProvider = new UnityInstanceProvider();
        }

        public UnityServiceBehavior(IUnityContainer unity)
        {
            InstanceProvider = new UnityInstanceProvider();
            InstanceProvider.Container = unity;
        }

        public UnityInstanceProvider InstanceProvider { get; set; }

        public void AddBindingParameters(
                ServiceDescription serviceDescription,
                ServiceHostBase serviceHostBase,
                Collection<ServiceEndpoint> endpoints,
                BindingParameterCollection bindingParameters) { }

        public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
        {
            foreach (var cdb in serviceHostBase.ChannelDispatchers)
            {
                var cd = cdb as ChannelDispatcher;
                if (cd != null)
                {
                    foreach (var ed in cd.Endpoints)
                    {
                        InstanceProvider.ServiceType = serviceDescription.ServiceType;
                        ed.DispatchRuntime.InstanceProvider = InstanceProvider;
                    }
                }
            }
        }

        public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { }

        public void AddToHost(ServiceHost host)
        {
            if (serviceHost != null)
            {
                return;
            }

            host.Description.Behaviors.Add(this);
            serviceHost = host;
        }
    }
}

ApplyDispatchBehavior can be used to change run-time property values or insert custom extension objects such as error handlers, message or parameter interceptors, and security extensions. In the UnityServiceBehavior class, ApplyDispatchBehavior is used to assign a custom InstanceProvider to the service endpoints. The InstanceProvider in this case is our own UnityInstanceProvider (the implementation of which is shown below). It is the InstanceProvider that will actually use Unity’s functionality to resolve the dependencies in the service implementation.

The behaviour of the Validate and AddBindingParameters methods does not need to change, so they have empty implementations.

UnityInstanceProvider

This implementation of IInstanceProvider allows control over the instantiation of the service class. Our aim is for Unity to resolve the service type (ProductService in this example) and its dependencies.

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using Microsoft.Practices.Unity;

namespace WcfServiceApplication.DependencyInjection.WCF
{
    public class UnityInstanceProvider : IInstanceProvider
    {
        public UnityInstanceProvider()
            : this(null) { }

        public UnityInstanceProvider(Type type)
        {
            ServiceType = type;
            Container = new UnityContainer();
        }

        public IUnityContainer Container { get; set; }

        public Type ServiceType { get; set; }

        public object GetInstance(InstanceContext instanceContext, Message message)
        {
            return Container.Resolve(ServiceType);
        }

        public object GetInstance(InstanceContext instanceContext)
        {
            return GetInstance(instanceContext, null);
        }

        public void ReleaseInstance(InstanceContext instanceContext, object instance) { }
    }
}

In the UnityServiceBehavior constructor a new UnityInstanceProvider is created and the Unity container from DIWrapper.Container is assigned as its Container. The GetInstance method then calls Container.Resolve(ServiceType) to resolve an instance of the required type. This is where Unity resolves the dependencies and returns an instance of the service type with those dependencies satisfied. WCF can now happily use this instance for the service.

Add the dependency to the service

At present the ProductService does not have any dependencies, so I will add one on a product repository from which we will return some data. The ProductService now looks like this:

using System.ServiceModel.Activation;
using WcfServiceApplication.Contract;
using WcfServiceApplication.Domain;
using WcfServiceApplication.Repository.Interface;

namespace WcfServiceApplication.Implementation
{
    public class ProductService : IProductService
    {
        private readonly IProductRepository productRepository;

        public ProductService(IProductRepository productRepository)
        {
            this.productRepository = productRepository;
        }

        public Product GetProductById(int productId)
        {
            return productRepository.GetProductById(productId);
        }
    }
}

Note that there is no parameterless constructor, so this service can no longer be resolved via the standard WCF ServiceHost. I have kept the service implementation simple and just moved the logic that was there into the new repository.

using WcfServiceApplication.Domain;
using WcfServiceApplication.Repository.Interface;

namespace WcfServiceApplication.Repository
{
    public class ProductRepository : IProductRepository
    {
        public Product GetProductById(int productId)
        {
            return new Product
            {
                Id = productId,
                Name = "Product " + productId,
                Description = "A nice product"
            };
        }
    }
}

Configure Unity in Global.asax

Finally we need to configure Unity in the Global.asax of the ServiceHost. We are going to tell Unity to give us back a ProductRepository instance whenever a constructor is expecting an IProductRepository.

using System;
using Microsoft.Practices.ServiceLocation;
using Microsoft.Practices.Unity;
using WcfServiceApplication.DependencyInjection;
using WcfServiceApplication.Repository;
using WcfServiceApplication.Repository.Interface;

namespace WcfServiceApplication
{
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            ConfigureUnity();
        }

        protected void Session_Start(object sender, EventArgs e){}

        protected void Application_BeginRequest(object sender, EventArgs e){}     

        protected void Application_AuthenticateRequest(object sender, EventArgs e){}

        protected void Application_Error(object sender, EventArgs e){}

        protected void Session_End(object sender, EventArgs e){}

        protected void Application_End(object sender, EventArgs e){}

        private void ConfigureUnity()
        {
            // Configure common service locator to use Unity
            ServiceLocator.SetLocatorProvider(() => new UnityServiceLocator(DIWrapper.Container));

            // Register the repository
            DIWrapper.Container.RegisterType<IProductRepository, ProductRepository>();
        }
    }
}

Now we are all ready to run the service. In the next post I will look at using Unity to resolve Channels for the MVC client so we can just inject the service interfaces into the MVC controller, removing the need to call the Channel factory code from the previous post each time we want a new Channel to consume a service.

Posts in this series: