Managing parent and child collection relationships in Entity Framework – What is an identifying relationship anyway?

Some of the posts on this site are more of a reminder to myself about some problem I have encountered and the solution I found for it. This is one such post. The issue addressed below is fairly common and has been described and solved more comprehensively elsewhere, but I have added it here so I can find it again should the need arise. If it proves helpful to anyone else then all the better.

It is often desirable to have a parent entity that has a collection of child entities. For example an Order entity may have a collection of OrderLine entities, or a User entity may have a collection of Role entities. These examples may look similar, but they have a subtle difference that can cause behavioural issues in some circumstances.

In the first example the order lines belong to an order and it doesn’t make sense for them to exist on their own. In the second example a Role can exist independently from the User.

This is the difference between an identifying and non-identifying relationship. Another example from Stack Overflow:

A book belongs to an owner, and an owner can own multiple books. But the book can also exist without the owner, and it can change owners. The relationship between a book and an owner is a non-identifying relationship.

A book, however, is written by an author, and the author could have written multiple books. But the book needs to be written by an author; it cannot exist without an author. Therefore the relationship between the book and the author is an identifying relationship.

Whatever the type of relationship, it would be nice to be able to just add and remove child items to the parent collection and let EF worry about all the bothersome adding and removing from the database.

Adding to the collection works correctly in both cases, and if you have a non-identifying relationship (your child foreign key is nullable) you will also be fine when removing items from the collection.

If you have only implied an identifying relationship (your child primary key is on the table's Id field and your child foreign key is non-nullable) you are likely to be in for a world of pain trying to remove items from the child collection.

When you remove an item from a child collection with the above kind of relationship, Entity Framework tries to update the child entity's foreign key to null. This causes problems because the foreign key is non-nullable, and you get this System.InvalidOperationException when the context saves:

The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.
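To illustrate, a removal along the following lines will trigger that exception at SaveChanges() when the relationship is only implied. This is a minimal sketch; the context name is invented for the example:

```csharp
// Sketch: assumes an Order/OrderLine pair where OrderLine has a
// non-nullable foreign key that is NOT part of its primary key
// (an implied identifying relationship). "OrdersContext" is illustrative.
using (var context = new OrdersContext())
{
    var order = context.Orders.Include("OrderLines").First();

    // This only severs the relationship; EF tries to null out the
    // OrderLine's foreign key rather than delete the row...
    order.OrderLines.Remove(order.OrderLines.First());

    // ...so this throws the System.InvalidOperationException above.
    context.SaveChanges();
}
```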

The way to stop this from happening is to make the identifying relationship explicit by specifying the foreign key field of the child to be part of its primary key. I know that in the ORM world composite keys are not really in vogue, but this makes it explicit to Entity Framework that the child cannot exist without its parent, and so should be deleted when it is removed from the parent's collection.

How do I specify a composite key?

To set the composite key, simply annotate the entity (or use the fluent API as part of OnModelCreating in the context). If you have an OrderLine entity that has an identifying relationship with an Order entity, the OrderLine entity should be annotated like this:

public class OrderLine
{
    [Key, Column(Order = 0), DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    public string Description { get; set; }

    public Decimal Cost { get; set; }

    [Key, ForeignKey("Order"), Column(Order = 1)]
    public int OrderId { get; set; }

    public virtual Order Order { get; set; }
}
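The fluent API alternative would look something like the following, configured in OnModelCreating. This is a sketch only; the context name and DbSet properties are illustrative:

```csharp
// Sketch of the equivalent fluent API configuration (EF 5 style).
// "OrdersContext" and its DbSets are invented for the example.
public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<OrderLine> OrderLines { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Composite key: the child's own Id plus its foreign key to the parent,
        // making the identifying relationship explicit.
        modelBuilder.Entity<OrderLine>()
            .HasKey(ol => new { ol.Id, ol.OrderId });

        modelBuilder.Entity<OrderLine>()
            .Property(ol => ol.Id)
            .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
    }
}
```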

Getting started with automated acceptance tests in an Agile project

Automated testing can bring great benefits to an Agile project but it is not always clear how to best integrate the processes and practices effectively. In this post I will outline a few ideas that might help, along with some pitfalls, based on experience gained from my own projects.

What is automated acceptance testing?

I am using the term automated acceptance tests to mean tests that exercise a system from its most external interface. For the purposes of this post I will be mainly talking about tests that exercise the user interface, but it could also mean tests that exercise an externally visible API such as a web service. The important thing about the tests is that they test the expected behaviour of the system, ensuring that the desired functionality is met. This builds an executable specification for the system.

What does it bring to an Agile project?

In this post I am going to use terms from Scrum in particular, but the points covered are applicable to any Agile process (and more than likely non-Agile processes as well).

Agile projects are focused on meeting the needs of their customers (Product Owner in Scrum terms), and one of the ways this is done is to use acceptance criteria to define the expected behaviour of a particular user story. It is very valuable to be able to prove that the acceptance criteria have been satisfied, whilst building a very comprehensive suite of automated regression tests. A suite of automated acceptance tests gives incredible confidence when building the system, especially in Agile developments where there may be refactoring and changes of direction as the changing needs of the Product Owner are met.

Where does automated testing fit into the process?

One of the most important things I have learnt from using automated acceptance testing on an Agile project is that it has to be a first class part of the development process. Depending on the maturity of your team this may mean having an explicit step or task to ensure it gets done, or having it as part of your definition of done. Either way it needs to be given the same level of importance as the other practices. A story cannot be done without automated tests that at least cover the acceptance criteria of the story. It is almost certain that when time is tight and people are under pressure that the automated tests will be the first things to be skipped. This is almost always a mistake, and one which I have learnt the hard way. Missing acceptance tests will come back to haunt you. Include them in your process and account for them in your planning activities.

Who writes the automated tests?

The creation of automated acceptance tests requires input from all the members of an Agile team. The roles and make-up of your Agile team may differ, but typically acceptance criteria will be generated with the input of the product owner and analyst (and maybe also the tester in some teams), who will also create the specific scenarios that will be automated. As the tests are automated, there is obviously some development work required to perform the automation. There are ways to gain a lot of reuse from a test suite and to reduce the amount of additional development required to extend the suite. I will talk about some of these when looking at some specific tools and technologies in a later section.

What does it mean for ‘traditional’ testing on the project?

In recent years I have become more convinced that it is no longer possible to speak about the traditional siloed roles in software development. The roles of analyst, developer and tester continue to become blurred as roles change and responsibilities overlap. The creation of automated acceptance tests requires a little bit of all these traditional disciplines. Who will be involved in your team depends on the skills and abilities of the team members. To make automated testing work in your team it will be necessary to let go of the idea that developers don’t do testing, or that testers don’t write code.

That said, automated acceptance tests are not meant as a replacement for traditional exploratory testing or to undermine the role of traditional testing in any way. Instead they should be seen as complementary.

This is a great post that discusses the aims of automated and exploratory testing:

Automated vs. exploratory – we got it wrong!

Automated acceptance testing as part of the build process

To get the most out of automated acceptance tests they need to be part of your Continuous Integration build. Acceptance tests are never going to be fast running, and this can pose some challenges to getting the best feedback from your tests. As an indication, my current project has about 300 Selenium based tests and takes about an hour to run. We are 7 sprints in and the number of tests is going to get much higher. At present we have a fast CI build running only the unit tests to give instant feedback, followed by an integration build containing all the acceptance tests that runs after a successful CI build. We are finding that the feedback is a bit slow and are planning to introduce the concept of a ‘current’ integration build that will run a subset of the acceptance tests that have been tagged as being in the area for which fast feedback is required (i.e. the area in which work is being undertaken). The full set of tests will continue to run at their slower cadence.

Using the right tools

My tools of choice are SpecFlow and Selenium Web Driver, but there are others that perform a similar function. It is important to understand the tools and the best way to use them. When talking about automated acceptance tests, and specifically using Selenium to drive the browser, I often encounter people who have tried to do automation and then have stopped as the tests become brittle and the cost of maintenance becomes too high.  Here are a few tips from my experience of using automated acceptance tests on a large project:

  • Aim to get a lot of reuse from your SpecFlow scenarios. Use the features built into SpecFlow, such as ScenarioContext.Current and its DI capabilities, to make steps that can be reused across many features. If done correctly you will find that it is very easy to add new tests without writing any new steps at all.
  • Use the page object pattern to encapsulate the logic for the web pages. This is one of the single biggest factors in ensuring your test suite is maintainable. Without page objects you will find yourself fixing lots of tests due to a small change to a single page.
  • Think about the wider ecosystem. We are using Selenium Server to give us the ability to run tests across multiple browsers.
  • Learn about your tools to get the best from them. Writing the automation will be a lot more pleasurable if you understand how things are working.
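As an illustration of the first point, a step can stash state in ScenarioContext.Current so that other steps, in any feature file, can pick it up without being coupled together. This is a minimal sketch; the step wording and page object name are invented for the example:

```csharp
// Sketch: sharing state between reusable SpecFlow steps via
// ScenarioContext.Current. The page object and step text are illustrative.
[Binding]
public class EventSearchSharedSteps
{
    [Given(@"I am on the event search page")]
    public void GivenIAmOnTheEventSearchPage()
    {
        var searchPage = CrazyEventsSearchPage.NavigateTo(new FirefoxDriver());

        // Store the page object so any later step, in any feature,
        // can retrieve the same instance.
        ScenarioContext.Current.Set(searchPage);
    }

    [When(@"perform a search")]
    public void WhenPerformASearch()
    {
        var searchPage = ScenarioContext.Current.Get<CrazyEventsSearchPage>();
        searchPage.SubmitSearch();
    }
}
```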

The posts below discuss how to make a start writing maintainable automated acceptance tests:

Behavioural testing in .Net with SpecFlow and Selenium (Part 1)

Behavioural testing in .Net with SpecFlow and Selenium (Part 2)

Managing SpecFlow acceptance test data with Entity Framework


  • Automated acceptance tests are about building an executable specification for your system.
  • The executable specification is very useful to prove the acceptance criteria have been met and to give confidence that changes have not broken functionality.
  • Make automated tests part of your process. Do not skimp on them, even if you are under pressure to get things done. Add them to your definition of done.
  • Creating the tests is an activity for all the team, although traditional roles and responsibilities may be blurred.
  • Automated tests are in addition to traditional exploratory style testing and are not meant as a replacement.
  • Get the most from your tests by choosing a sensible CI strategy to give the best feedback.
  • Choose the right tools for the job, and spend time learning to get the best from them.
  • Become an advocate of automated testing in your organisation.

Managing SpecFlow acceptance test data with Entity Framework

A look into how Entity Framework can be used to simplify the management of data for your acceptance test scenarios.

In a previous set of posts I looked at how it is possible to use SpecFlow and Selenium to create automated acceptance tests driven via the browser.

An important part of any acceptance or integration test is the test data. It is important that the data for each test is isolated from any other test. Often these tests require complicated data setup, which can add additional complication and maintenance to the test suite. This post assumes that the system under test is using a SQL database.

Traditional ways to manage test data

There are a few ways to manage the test data for acceptance tests. Probably the most widely used are:

  • Run some SQL scripts as part of the test setup and tear-down – Can be effective but means extra maintenance of the scripts is required. This type of script can be prone to errors as they are built outside of the type safety that the rest of the system benefits from.  This approach also requires a mechanism to run the scripts when the tests run.
  • Use a standard set of data for all tests – the main issue with this approach is that it may not allow for proper isolation of the test data. It is possible to use transactions or some similar mechanism to allow for this, but sometimes it is only the order of the running tests that ensures the correct data is available for the tests.

There are of course other ways, and many variations on the theme.

A better approach..?

It would be better if it were possible to add the data as part of the test setup in a type-safe way, using the language of the system without the need for external scripts.

If you are using an ORM in your application, it is very likely that you already have the tools needed to do just this. In this post I am going to use Entity Framework 5, but you could just as easily use nHibernate or something else to get the same result.

An example

Here is the SpecFlow scenario we are going to automate.

@eventSearch
Scenario: Search for events in a region and display the results
	Given that the following regions exist
	| Id | Name   |
	| 1  | North  |
	| 2  | South  |
	| 3  | London |
	And the following events exist
	| Code    | Name                              | Description                                   | Region Id |
	| CR/1001 | Crochet for Beginners             | A gentle introduction into the art of crochet | 1         |
	| CH/3001 | Cat Herding                       | A starter session for the uninitiated         | 3         |
	| CH/3002 | Cat Herding - Advanced techniques | Taking it to the next level                   | 3         |
	When I navigate to the event search page
	And I select the "London" region 
	And perform a search
	Then The search results page is displayed
	And the following results are displayed
	| Code    | Name                              | Description                           | Region |
	| CH/3001 | Cat Herding                       | A starter session for the uninitiated | London |
	| CH/3002 | Cat Herding - Advanced techniques | Taking it to the next level           | London |

It is only the first two steps of the scenario that we are interested in for this post. Even before we start satisfying the first step we should make sure that the database is in the correct state to start adding the data for the test.

To do this we can use a BeforeScenario step. As the name suggests, this step is called before each scenario is run. You will notice that our scenario had a tag of @eventSearch. The same tag will be used by the BeforeScenario attribute to ensure that the BeforeScenario method is only triggered by scenarios that have the @eventSearch tag. This becomes important when you have multiple feature files in the same project.

[BeforeScenario("eventSearch")]
public void Setup()
{
    using (var context = new CrazyEventsContext())
    {
        context.Database.ExecuteSqlCommand("DELETE FROM Events; DBCC CHECKIDENT ('CrazyEvents.dbo.Events', RESEED, 0)");
        context.Database.ExecuteSqlCommand("DELETE FROM Regions; DBCC CHECKIDENT ('CrazyEvents.dbo.Regions', RESEED, 0)");
    }
}

We are using the ExecuteSqlCommand method to clear the contents of the tables and reseed the identity columns of the tables. If the tables are not used as part of a foreign key relationship it is possible to use TRUNCATE TABLE.
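In that case the clear and reseed collapses into a single statement, along these lines (a sketch; the table name is illustrative):

```csharp
// Sketch: TRUNCATE TABLE both removes all rows and resets the identity
// seed, but only works on tables with no incoming foreign key references.
context.Database.ExecuteSqlCommand("TRUNCATE TABLE Lookups");
```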

The next step is to use Entity Framework to populate the region and event details from the SpecFlow table. Although I have these steps as part of a scenario, you could also use a Background step to have the data populated at the start of each scenario.

[Given(@"that the following regions exist")]
public void GivenThatTheFollowingRegionsExist(Table table)
{
    var regions = table.CreateSet<Region>();
    using (var context = new CrazyEventsContext())
    {
        foreach (var region in regions)
        {
            context.Regions.Add(region);
        }
        context.SaveChanges();
    }
}

[Given(@"the following events exist")]
public void GivenTheFollowingEventsExist(Table table)
{
    var events = table.CreateSet<Event>();
    using (var context = new CrazyEventsContext())
    {
        foreach (var crazyEvent in events)
        {
            context.Events.Add(crazyEvent);
        }
        context.SaveChanges();
    }
}

SpecFlow has some table helper extension methods. I am using table.CreateSet<T>() to create an IEnumerable of the specified type T from the data in the table. For this to work your table column names need to be the same as the properties of your entity (you can use spaces for readability if you like).
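For example, the Region entity that CreateSet<Region> maps onto would only need property names matching the table columns. The entity is not shown in the post, so this is an assumed sketch based on the scenario's Id and Name columns:

```csharp
// Sketch: CreateSet<Region> matches the scenario table's "Id" and "Name"
// columns to these properties by name.
public class Region
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```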

After I have the list of entities, I simply use the Entity Framework context to add each entity to the context in turn. Remember to call SaveChanges() to perform the update.

And that is all there is to it. The nice thing about this method of managing test data is that the data required for the test is specified as part of the test itself. Everyone can see what data is expected to be present when the test is run.

The easy way to run Selenium and SpecFlow tests in a TeamCity build with IIS Express


In Behavioural testing in .Net Part 1 and Part 2 I looked at creating behavioural acceptance tests with SpecFlow and Selenium. At some point after you have created your tests you will probably want to run them as part of a Continuous Integration build, or a nightly build or similar on a CI server such as TeamCity.

As the tests rely on browser interaction they obviously need the website to be running on the build server in order for the tests to function. There are several ways to get a web application up and running on the build server as part of the build, Web Deploy is probably one of the best.

However, there are times you might want to get things up and running quickly as part of a spike or just to try something out. I am not suggesting that this is necessarily enterprise strength, but it may well be enough depending on your requirements.

This is not a TeamCity tutorial, so I am going to focus on the build steps required.

Step 1 – Build the project

This is fairly obvious. You cannot do much without a successful build.

Step 2 – Start the web application with IIS Express

IIS Express is a lightweight web server that is generally used for development purposes. It can be used in various ways and with different types of web application. For more information on how it can be used see this page. The interesting bit is that IIS Express can launch a web application directly from a folder from the command prompt, and can be used for various types of application including ASP.Net, WCF and even PHP.

A command line build runner with a custom script is in order.

The text for the custom script is

start C:\"Program Files (x86)"\"IIS Express"\iisexpress.exe /path:"\CrazyEvents" /port:2418

This tells TeamCity to start a new command window and run the IIS Express executable. The /path: parameter tells IIS Express which folder contains the web application. I have used a TeamCity parameter to specify the checkout directory. You need to make sure this points to the directory of your web application. If it is not correct the build log will let you know exactly where it is pointing and you can adjust it as required.

The /port: parameter specifies the port that the website will run on. This needs to be the same as the port specified in your tests.
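One way to keep the two in sync is to hold the base URL in a single place in the test project, along these lines (a sketch; the class and constant names are invented for the example):

```csharp
// Sketch: keep the site URL (and therefore the port) in one place so it
// always matches the /port: value passed to IIS Express on the build server.
public static class TestSettings
{
    public const string BaseUrl = "http://localhost:2418/";
}
```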

Just a quick note about the script. TeamCity has some peculiar behaviour regarding double quotes in command strings, and it took me a little while to get it right. If you have problems but the script looks correct, double check the quote placement.

As another side note, you will not be able to see the command window when the step runs, but it will be running and you should be able to navigate to the website on the build server.

Step 3 – Running the tests

Run the tests in the usual way with the correct build runner.

Step 4 – Stop IIS Express

The final step is to stop IIS Express. We will use another command line runner.

The custom script is:

taskkill /IM IISExpress.exe /F

which forces the IISExpress instance to close. Once again, you cannot see this in action but it is working in the background.

And that is all there is to it, quick and easy. Behavioural testing doesn’t have to mean a painful setup and teardown experience.

Behavioural testing in .Net with SpecFlow and Selenium (Part 2)

In this series of posts I am going to run through using Selenium and SpecFlow to test .Net applications via the user interface. I will talk about the situations in which this type of testing can be used, and how we can create effective, non-brittle tests with the help of the Page Object pattern.


In the first post in this series I looked at creating a scenario with SpecFlow and generating some empty step definitions. In this post I will implement the steps using Selenium in order to make the scenario test pass.


Selenium is a web automation framework. In simple terms, it allows you to drive a browser via code. Selenium can be used in many browser automation situations, but one of its main uses is for automated testing of web applications.

When it comes to testing the behaviours of web pages there has traditionally been a problem where the testing code becomes tied to the implementation details of the page. This can make the tests brittle as a change to the page implementation details, such as a change to a field name or Id, will break the test even if the behaviour of the page has not changed.

In order to protect against this it is desirable to encapsulate the implementation details of a page under test in a single place, so that if there is an implementation change there is only a single place to change, instead of having to fix all the separate bits which rely on those details.

Separating the behaviour of a page from its implementation details is the purpose of the Page Object pattern. A page object is simply a class that encapsulates the implementation details for a web page, allowing the automation to focus only on behaviour.

Implementing the first scenario

The first test is all about the Search Page, so it is the Search Page page object that is created first. I find it easiest to work in a TDD-style workflow, using the SpecFlow steps to guide me towards the implementation one step at a time. Let’s have a look at the implementation:

public class CrazyEventsSearchPage
{
    [FindsBy(How = How.Id, Using = "region")]
    private IWebElement regionDropdown;

    [FindsBy(How = How.Name, Using = "Submit")]
    private IWebElement submitButton;

    private static IWebDriver driver;

    public static CrazyEventsSearchPage NavigateTo(IWebDriver webDriver)
    {
        driver = webDriver;
        var searchPage = new CrazyEventsSearchPage();
        PageFactory.InitElements(driver, searchPage);
        return searchPage;
    }

    public void SelectRegionDropdown()
    {
        regionDropdown.Click();
    }

    public IList<string> GetAllPossibleRegions()
    {
        var selectList = new SelectElement(regionDropdown);
        var options = selectList.Options;

        return options.Select(webElement => webElement.Text).ToList();
    }
}

Remember that the job of a page object is to encapsulate the implementation details of a web page. Selenium provides some helpers for working with page objects. To use them, make sure you have installed the Selenium WebDriver Support Classes into your test project via NuGet. PageFactory.InitElements(driver, searchPage) takes a page object, in this case searchPage, and ensures that all the elements are populated with the details from the web page. In this case it will populate the regionDropdown and submitButton web elements ready to be used.

It is the methods and fields of the page object that encapsulate the implementation details. If the id of the region dropdown changed, only the FindsBy attribute would need to be updated. If submitting the form needed to do something else, again only a single change would be required.

The NavigateTo method is used as the entry point into the web site. The other methods are used by the SpecFlow steps.

[Binding]
public class EventSearchSteps
{
    private CrazyEventsSearchPage searchPage;
    private CrazyEventsSearchResultsPage searchResultsPage;
    private IWebDriver driver;

    [BeforeScenario]
    public void Setup()
    {
        driver = new FirefoxDriver();
    }

    [AfterScenario]
    public void TearDown()
    {
        driver.Quit();
    }

    [Given(@"that I want to search for an event by region")]
    public void GivenThatIWantToSearchForAnEventByRegion()
    {
        searchPage = CrazyEventsSearchPage.NavigateTo(driver);
    }

    [When(@"a list of possible regions is presented")]
    public void WhenAListOfPossibleRegionsIsPresented()
    {
        searchPage.SelectRegionDropdown();
    }

    [Then(@"the list should contain ""(.*)"", ""(.*)"", ""(.*)"" and ""(.*)""")]
    public void ThenTheListShouldContainAnd(string p0, string p1, string p2, string p3)
    {
        var regions = searchPage.GetAllPossibleRegions();
        Assert.IsTrue(regions.Contains(p0));
        Assert.IsTrue(regions.Contains(p1));
        Assert.IsTrue(regions.Contains(p2));
        Assert.IsTrue(regions.Contains(p3));
    }

    [Then(@"""(.*)"" should be the default value")]
    public void ThenShouldBeTheDefaultValue(string p0)
    {
        var selectedRegion = searchPage.GetSelectedRegion();
        Assert.AreEqual(p0, selectedRegion);
    }
}
I have added some setup and teardown methods to ensure that the web driver is initialised and shut down correctly. With all the implementation in place the scenario now passes.

Another scenario

It’s time to add another scenario:

Scenario: Search for events in a region and display the results
	Given that I want to search for an event by region
	When I select the "London" region
	And perform a search
	Then The search results page is displayed
	And the following results are displayed
	| Event Code | Event Name                        | Region | Description                           |
	| CH/3001    | Cat Herding                       | London | A starter session for the uninitiated |
	| CH/3002    | Cat Herding - Advanced techniques | London | Taking it to the next level           |

This scenario uses a SpecFlow table to define the expected results. Gherkin has a lot of powerful features to make the most of defining scenarios. The step definitions were generated in the same way as for the first scenario. I created another page object for the search results:

public class CrazyEventsSearchResultsPage
{
    private static IWebDriver driver;

    [FindsBy(How = How.Id, Using = "searchResults")]
    private IWebElement searchResults;

    public CrazyEventsSearchResultsPage(IWebDriver webDriver)
    {
        driver = webDriver;
    }

    public bool IsCurrentPage()
    {
        return driver.Title == "Crazy Events - Search Results";
    }

    public List<List<string>> GetResults()
    {
        var results = new List<List<string>>();
        var allRows = searchResults.FindElements(By.TagName("tr"));
        foreach (var row in allRows)
        {
            var cells = row.FindElements(By.TagName("td"));
            var result = cells.Select(cell => cell.Text).ToList();
            results.Add(result);
        }
        return results;
    }
}
This page object doesn’t have a direct entry point, as this page cannot be reached directly without a search being performed. Instead it is reached from the search page object. I have added some extra functionality for performing the search and initialising the search results page object. We will see these in a moment.

And here are the step definitions for the new steps. Remember that any identical steps share an implementation.

[When(@"I select the ""(.*)"" region")]
public void WhenISelectTheRegion(string region)
{
    searchPage.SelectRegion(region);
}

[When(@"perform a search")]
public void WhenPerformASearch()
{
    searchResultsPage = searchPage.SubmitSearch();
}

[Then(@"The search results page is displayed")]
public void ThenTheSearchResultsPageIsDisplayed()
{
    Assert.IsTrue(searchResultsPage.IsCurrentPage());
}

[Then(@"the following results are displayed")]
public void ThenTheFollowingResultsAreDisplayed(Table table)
{
    var results = searchResultsPage.GetResults();
    var i = 1; // ignore the header row

    foreach (var row in table.Rows)
    {
        Assert.IsTrue(results[i].Contains(row["Event Code"]));
        Assert.IsTrue(results[i].Contains(row["Event Name"]));
        i++;
    }
}

The additional methods for the search page object are:

public string GetSelectedRegion()
{
    var regionElement = new SelectElement(regionDropdown);
    return regionElement.SelectedOption.Text;
}

public void SelectRegion(string region)
{
    var regionElement = new SelectElement(regionDropdown);
    regionElement.SelectByText(region);
}

public CrazyEventsSearchResultsPage SubmitSearch()
{
    submitButton.Click();

    var searchResultsPage = new CrazyEventsSearchResultsPage(driver);
    PageFactory.InitElements(driver, searchResultsPage);

    return searchResultsPage;
}

The SubmitSearch() method initialises the search results page object ready to be used in the scenario steps.

In conclusion…

Page objects are a powerful way to overcome the traditional problems with automated browser tests. When Selenium is coupled with SpecFlow they allow for a TDD style workflow that can give true outside-in development.

Another useful side effect of using SpecFlow scenarios is that the steps create an API for the application. Once the steps have been created it is possible to create new scenarios without any coding at all, which means that it is accessible to members of the team who might have otherwise found implementing tests a bit too developer centric.

Even the implementation of the page objects is not generally too complicated once the pattern has been established. I appreciate that there is a bit more of a learning curve, but I think this type of testing offers far more than the record-and-playback type of behavioural tests, and it would be a useful skill for all team members to have.

Finally, as with all code, do not hesitate to refactor the scenarios, steps and page objects. There are lots of things that could be refactored in this simple example. A couple of the obvious things are the hard coded URL and logic that is used to get the data from the search results page. I will leave these as an exercise for the reader.


Behavioural testing in .Net with SpecFlow and Selenium (Part 1)

In this series of posts I am going to run through using Selenium and SpecFlow to test .Net applications via the user interface. I will talk about the situations in which this type of testing can be used, and how we can create effective, non-brittle tests with the help of the Page Object pattern.


I have consciously avoided using the term Behaviour Driven Development (BDD) in the title as this post isn’t really about BDD as such. It is about using some tools in a way that could be applied in many situations, with BDD being just one of them. I want to discuss how to use the BDD framework SpecFlow and the web automation framework Selenium to create tests that exercise an application via its web user interface.

When to use this approach

You could use this approach in several situations. In any situation the idea is to create an executable specification and create automated testing for your code base. The two most common situations are:

  1. Automated acceptance tests – let’s call this approach ‘test afterwards’. There is often a desire to create acceptance tests (or integration tests or smoke tests or whatever…) for an application. Using SpecFlow and Selenium you can create tests that exercise the application as a user would actually interact with it. A suite of acceptance tests is a worthwhile addition to any application.
  2. Behaviour Driven Development style – This is the ‘test before’ approach. In Test Driven Development with MVC I talked about how a true BDD style would ideally test user interaction with the application. These tools allow you to do just that. The workflow would be very similar to the outside-in approach I describe in that post, only you would create scenarios in SpecFlow instead of writing unit tests. Of course there are tons of approaches when it comes to BDD, this being just one of them. Liz Keogh has a useful summary and introduction to BDD that is worth a look if you want to know more about it.

In this post I am going to use a test afterwards approach so that I can focus on the tooling.

Installing SpecFlow and Selenium

I have created a new folder to house my acceptance tests. Installing Selenium is easy, simply grab the latest version from NuGet (you are using NuGet, right?). Make sure to select Selenium WebDriver. For SpecFlow grab the latest version from NuGet and then get the Visual Studio integration from the Visual Studio Gallery. This will add some extra VS tooling we will use later.
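If you prefer the Package Manager Console to the NuGet dialogue, the equivalent commands are along these lines (these were the package IDs at the time of writing; check NuGet for the current names):

```
PM> Install-Package Selenium.WebDriver
PM> Install-Package SpecFlow
```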

The first scenario

I am going to revisit the event search scenarios from Test Driven Development with MVC and expand on them a little. The first thing to do is add a SpecFlow feature file to the project. Right click on the project, select New Item… and add a new SpecFlow feature file. I have called mine EventSearch.feature. The feature file has a feature, which is essentially a user story, and a number of scenarios in the Given-When-Then format.

The feature files are written in the Gherkin language, which is intended to be a human readable DSL for describing business behaviour.

Let’s start populating the feature file with the first scenario.

Feature: EventSearch
	In order to find an event to participate in
	As a potential event delegate
	I want to be able to search for an event

Scenario: Event Search Criteria should be displayed
	Given that I want to search for an event by region
	When I select to choose a region
	Then a list of possible regions is presented
	And the list should contain "All", "North", "South" and "London"
	And "All" should be the default value

The next step is to create the bindings that allow us to wire up the Gherkin feature to our application’s code. Before we do that we just need to tell SpecFlow to use Visual Studio’s testing framework MSTest instead of the default NUnit. This is done by adding a line of configuration to your app.config; you will find that the SpecFlow section already exists:

<specFlow>
    <!-- For additional details on SpecFlow configuration options see -->
    <unitTestProvider name="MsTest.2010"/>
</specFlow>

As a brief aside, you will notice that the feature file has a code behind .cs file that has been created by SpecFlow. If you have a look inside you will see that the code is a rather awkward looking MSTest test class. It is used to run the scenarios that are described in the feature file and implemented in the binding step definitions we are about to create. It is not pretty, but it is not intended to be read or edited by a user as it is generated by SpecFlow whenever the feature file is saved. SpecFlow will generate the correct type of testing hooks for the test framework you are using.
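To give a flavour of what that generated code-behind contains (the exact code varies between SpecFlow versions, so treat this as an illustrative sketch rather than the real file), there is roughly one [TestMethod] per scenario that replays the scenario steps through SpecFlow’s test runner:

```csharp
// Illustrative sketch of a SpecFlow-generated code-behind.
// The real file is version-specific and should never be edited by hand.
[TestClass]
public partial class EventSearchFeature
{
    [TestMethod]
    public virtual void EventSearchCriteriaShouldBeDisplayed()
    {
        var scenarioInfo = new TechTalk.SpecFlow.ScenarioInfo("Event Search Criteria should be displayed");
        this.ScenarioSetup(scenarioInfo);

        // Each line of the scenario is replayed through the test runner,
        // which matches it to a [Given]/[When]/[Then] binding by its regex.
        testRunner.Given("that I want to search for an event by region");
        testRunner.When("I select to choose a region");
        testRunner.Then("a list of possible regions is presented");
        testRunner.CollectScenarioErrors();
    }
}
```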

Create the step definitions

The step definitions are where we add code to perform the tests. The step definitions are defined in a class with a [Binding] attribute. When we run the feature, the test runner runs the tests defined in the feature code-behind, which are bound to these steps by SpecFlow. The easiest way to create these bindings is to use SpecFlow’s Visual Studio tooling. Right click on the scenario in the feature file and select “Generate step definitions”. This brings up a dialogue box that allows you to create the necessary bindings to wire up the test. I created the bindings in a new class called EventSearchSteps. The generated bindings look like this:

using System;
using TechTalk.SpecFlow;

namespace CrazyEvents.AcceptanceTests
{
    [Binding]
    public class EventSearchSteps
    {
        [Given(@"that I want to search for an event by region")]
        public void GivenThatIWantToSearchForAnEventByRegion() { ScenarioContext.Current.Pending(); }

        [When(@"I select to choose a region")]
        public void WhenISelectToChooseARegion() { ScenarioContext.Current.Pending(); }

        [Then(@"a list of possible regions is presented")]
        public void ThenAListOfPossibleRegionsIsPresented() { ScenarioContext.Current.Pending(); }

        [Then(@"the list should contain ""(.*)"", ""(.*)"", ""(.*)"" and ""(.*)""")]
        public void ThenTheListShouldContainAnd(string p0, string p1, string p2, string p3) { ScenarioContext.Current.Pending(); }

        [Then(@"""(.*)"" should be the default value")]
        public void ThenShouldBeTheDefaultValue(string p0) { ScenarioContext.Current.Pending(); }
    }
}

The step definitions have been created with a pending implementation ready for us to complete. You will see that some parts of the feature have generated steps containing attributes with what look like regular expressions, and that the methods have parameters. This is a feature of SpecFlow that allows for reuse of step definitions when the only thing that changes is a string value. We will see how this is used later. SpecFlow has lots of useful features like this; I would advise checking out the documentation to learn more about it.
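For example, because the default-value step captures the quoted string as a parameter, scenario lines like these two (the second is hypothetical, not part of our feature) would both bind to the same ThenShouldBeTheDefaultValue method, with p0 set to "All" and "North" respectively:

```
Then "All" should be the default value
Then "North" should be the default value
```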

Let’s run the test and see what happens.

ReSharper’s test runner shows that the test was inconclusive. This is no real surprise, as the test implementation is pending. Of course, you can use Visual Studio’s built-in test runner instead.

In the next post we will have a look at how we can implement the bindings with Selenium to automate browser interactions.


RESTful Web Services with Windows Communication Foundation

In this post I am going to investigate how it is possible to create RESTful web services with Windows Communication Foundation. As well as discussing how the services are created I am also going to dig a bit deeper into some of the issues that can arise with this type of approach, especially the Same Origin Policy for browser based service execution, and how we can work around it.

What is REST?

I don’t really want to go too deep into REpresentational State Transfer, as there are lots of excellent resources already written that will do a far better job of explaining it, however a brief recap of the basics can’t hurt.

REST is an architectural pattern for accessing resources via HTTP. A resource can represent any type of item a client application may wish to access. A client can be a user or could be a machine calling a web service to get some data.

If you were creating a traditional RPC style service to return the details of a product you may well have created something that is called like this (an illustrative URL):

http://website/productService/GetProduct?productId=12345

In this example we have a verb (get) and a noun (Product). A RESTful version of this service would be something along the lines of:

http://website/products/12345

In this example there is no explicit verb usage. Instead it is the HTTP verb GET that is used. In a RESTful web service it is the HTTP verbs (GET, POST, PUT, DELETE) that are used to interact with the resource.

GET – Used to return a resource. It may be a single product, as in the example above, or it could be a collection of products.
POST – Used to create a new resource.
PUT – Used to update a resource. Sometimes PUT is also used to create a new resource if there is no existing resource to update.
DELETE – Used to delete a resource.

The operations of a RESTful web service should be idempotent where appropriate to build resilience into the operations. It also makes sense that some verbs are not applicable to certain resources, depending on whether the resource is a collection of resources or a single resource (as we will see later).
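To make the idempotency point concrete: repeating an idempotent request leaves the server in the same state as issuing it once, which is not true of POST. Sketched against our (hypothetical) product URLs:

```
DELETE /products/12345    -> product 12345 is gone
DELETE /products/12345    -> product 12345 is still gone (same end state)

POST   /products          -> creates a new product
POST   /products          -> creates another new product (different end state)
```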

REST is not a new idea (it was introduced by Roy Fielding in his 2000 doctoral dissertation) but it has only recently been gaining serious traction against more widespread web service styles such as SOAP-based XML. REST’s lightweight style and potential for smaller payloads have seen its popularity increase, and major vendors, including Microsoft, have sought to add support for REST services into their products.

A RESTful web service adheres to the principles of REST, as well as specifying the MIME type of the data payload and the operations supported. REST itself does not mandate a particular payload format; it could really be any data format. In practice JSON has emerged as the de facto standard for RESTful services due to its lightweight nature.

Creating a WCF REST service

The WCF service will provide product information for some generic widgets.

The RESTful API for the service will support the following operations:

/Products

GET – Return a collection of products
POST – Add a new product to the product collection
PUT – Not supported, as we would want to edit the members of the collection of products, not the collection itself
DELETE – Not supported, as it would not really make sense to delete an entire collection of products

/Products/{productId} e.g. /products/12345

GET – Get the product represented by the product Id
POST – Not supported, as we cannot add anything at an individual product level
PUT – Update the product details.
DELETE – Delete the product

I am going to move fairly quickly when it comes to creating the service. If you are not familiar with WCF (or need a refresher) I recommend that you have a look at the earlier series on creating WCF services.

I am going to make use of some of the new .Net 4.0 WCF features and not use a .svc file for the service. When creating RESTful services this has the advantage of making the URL look a little more ‘RESTful’, so instead of http://website/products.svc/1 we can have http://website/products/1.

Let’s create a new WCF application solution. I have thrown away the .svc file that the wizard created for us. Rename the interface to something sensible; I have called mine IProductService. Remember that this internal naming of the service contract and its operations is not exposed once the service is up and running. It makes sense to name the contract and operations in a way that is sensible for the project and then map them to a more REST friendly URL. We will see how to do this later in the post.

A RESTful service is in many respects similar to any other. It still follows the ABC of WCF. The service has an address at which the operations are invoked, it has a binding that specifies the way the service will communicate, and it also has a Contract that define the operations. The Contract for the ProductService looks like this:

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    [WebInvoke(Method = "GET", UriTemplate = "", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    List<Product> GetProducts();

    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "", RequestFormat = WebMessageFormat.Json)]
    void CreateProduct(Product product);

    [OperationContract]
    [WebInvoke(Method = "GET", UriTemplate = "/{ProductId}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    Product GetProductById(string productId);

    [OperationContract]
    [WebInvoke(Method = "PUT", UriTemplate = "/{ProductId}", RequestFormat = WebMessageFormat.Json)]
    Product UpdateProductDetails(string productId, Product product);

    [OperationContract]
    [WebInvoke(Method = "DELETE", UriTemplate = "/{ProductId}", RequestFormat = WebMessageFormat.Json)]
    Product DeleteProduct(string productId);
}

It is the WebInvoke attribute that identifies that a service operation is available via REST. The Method property defines the HTTP verb that is to be used. The UriTemplate property allows a templated URL to be specified for the operation; parameters for the URL are specified between curly braces. The RequestFormat and ResponseFormat properties specify the data format for the exchange. We are using JSON.
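As a rough sketch of how that template matching behaves (WCF performs this for us at dispatch time, using the UriTemplate class), a candidate URI is matched against the template and the bracketed segments come back as bound variables:

```csharp
// Illustrative only: this is the matching WCF does on our behalf.
var template = new UriTemplate("/{ProductId}");
var baseAddress = new Uri("http://localhost:52845/products");

UriTemplateMatch match = template.Match(
    baseAddress, new Uri("http://localhost:52845/products/12345"));

// match.BoundVariables["ProductId"] now contains "12345", which is the
// value WCF passes into GetProductById as the productId parameter.
```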

As we are not using .svc files we must supply some routing so WCF knows how to route the requests to the service. Open up the Global.asax (or create one if the project doesn’t already have one) and locate the Application_Start method and add in a new route.

protected void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.Add(new ServiceRoute("Products", new WebServiceHostFactory(), typeof(ProductService)));
}

This route tells WCF that when a request comes in for Products it is routed to the ProductService. As an aside, if you were using a custom service host factory (for example, if you were using an IoC container to inject dependencies) you could specify it here instead of the standard WebServiceHostFactory.

It is the URI template and the routing that allows us to provide the RESTful web service addresses. Although we have a service operation called GetProductById that takes a Product Id, the resource is accessed via a URL in the form of /Products/12345.

Now the contract is out of the way we are ready to go. The actual implementation of the service operations is not very interesting as it has no bearing on the RESTful side of the service; it is just the same as any implementation of a WCF service contract interface.

“But wait” I hear you all shout, “you haven’t configured the service in the Web.config.” You are correct. In this example I am going to use another feature of .Net 4.0 WCF: default configuration. This allows WCF to automatically create default configuration for bindings, endpoints, behaviours and protocol mappings without the need to get your hands dirty in the Web.config. Of course, in a production environment it is likely that you would want more control over your configuration, and this can be added to the config as required.
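For reference, if you did want to take control, the explicit equivalent is a webHttpEndpoint standard endpoint in the Web.config. A minimal sketch (attribute values are illustrative) looks something like this:

```xml
<system.serviceModel>
  <standardEndpoints>
    <webHttpEndpoint>
      <!-- Applies to the web HTTP endpoints in this application -->
      <standardEndpoint helpEnabled="true"
                        automaticFormatSelectionEnabled="true"
                        defaultOutgoingResponseFormat="Json" />
    </webHttpEndpoint>
  </standardEndpoints>
</system.serviceModel>
```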

Let’s try the service

In order to test the service we need to call it. A simple test harness is required. I created a simple HTML page that uses jQuery to call out to the service:

<!DOCTYPE html>
<html>
<head>
    <title>WCF RESTful Web Service Example</title>
    <script type="text/javascript" src=""></script>
    <script type="text/javascript">
        var GetProductRequest = function () {
            $.ajax({
                type: "GET",
                url: "http://localhost:52845/products/",
                contentType: "application/json",
                dataType: "text",
                success: function (response) {
                    $("#getProductsResponse").text(response);
                },
                error: function (error) {
                    console.log("ERROR:", error);
                },
                complete: function () { }
            });
        };
    </script>
</head>
<body>
    <h1>WCF RESTful Web Service Example</h1>
    <h2>GET Response</h2>
    <button onclick="GetProductRequest()">GET Products</button>
    <pre id="getProductsResponse"></pre>
</body>
</html>

This simply uses jQuery’s ajax functionality to make a call to the service and output the response on the page. Let’s run it. As another aside, if you are using the ASP.NET Development Server when building and testing your services (and the chances are you probably are) then you will only be able to test the GET and POST methods, as the others are not allowed and will return a 405 error. If you want to be able to test the PUT and DELETE methods I would recommend using IIS Express as your development web server.

I am using Chrome 21 and there appears to be a problem. If we look at the developer tools in Chrome (press F12) we can see that a request was made with the OPTIONS method, and it returned a status of 405 Method Not Allowed. There is also a message about the origin not being allowed by something called Access-Control-Allow-Origin. This is due to the same origin policy.

The Same origin policy

The same origin policy is a browser security concept that prevents a client-side web application served from one site (origin) from obtaining data retrieved from another site. This pertains mainly to cross-site requests made from script. For two pages to be considered to be from the same origin they must have the same domain name, application layer protocol and port number.
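As an illustration (this is not browser code, just the comparison the browser performs internally), two URLs share an origin only when the scheme, host and port all match:

```javascript
// Two URLs are same-origin only if scheme, host and port are all equal.
const sameOrigin = (a, b) => {
  const ua = new URL(a), ub = new URL(b);
  return ua.protocol === ub.protocol
      && ua.hostname === ub.hostname
      && ua.port === ub.port;
};

console.log(sameOrigin("http://example.com/page", "http://example.com/api"));      // true: only the path differs
console.log(sameOrigin("http://example.com/page", "https://example.com/page"));    // false: different scheme
console.log(sameOrigin("http://example.com/page", "http://example.com:8080/api")); // false: different port
```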

Although the same origin policy is an important security concept with a lot of valid uses, it does not help our legitimate situation here. We need to be able to access the RESTful web service from a different origin.

Cross-origin resource sharing

Cross-origin resource sharing is a way of working with the restrictions that the same origin policy applies. In brief, the browser can use a Preflight request (this is the OPTIONS method request we have already seen) to find out whether a resource is willing to accept a cross origin request from a given origin. The requests and responses are validated to stop man in the middle type attacks. If everything is okay the original request can be performed. Of course there is more to it than that, but that is all we need to know for now. More information is available in the W3C spec for CORS.

As usual with browsers, CORS support is not universal, nor is it implemented consistently across all browsers. In some browsers it is all but ignored (I am looking at you, IE 8/9). We will need to ensure that the CORS support we add to the WCF service works in all situations. ‘When can I use Cross-Origin Resource Sharing‘ is a handy tool to see if your browser supports CORS.

Dealing with an OPTIONS Preflight request in WCF

There are a couple of ways of dealing with CORS in WCF.

The first is to use some custom WCF behaviours that can be applied to a service operation via an attribute. This approach has the advantage that it is possible to explicitly target which operations you want to accept cross domain requests, but it has quite an involved implementation. A good overview can be found here.

I am going to focus on the second method, which is to intercept the incoming requests, check for the presence of the OPTIONS method and respond appropriately. I will show the code and then we can talk about how it works. Back to the Global.asax, this time in the Application_BeginRequest method.

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Added to every response so that 'simple' cross origin requests,
    // which may not trigger a Preflight, can still be validated.
    HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", "*");

    if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
    {
        HttpContext.Current.Response.AddHeader("Cache-Control", "no-cache");
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Headers", "Content-Type, Accept");
        HttpContext.Current.Response.AddHeader("Access-Control-Max-Age", "1728000");
        HttpContext.Current.Response.End();
    }
}

As the method name suggests, this is called when a request is received. The first action is to add an Access-Control-Allow-Origin header to the response. This is used to specify the URLs that can access the resource. If you are feeling generous and want any site (or origin) to access the resource you can use the * wildcard. The reason this header sits outside of the check for the OPTIONS method is that browsers do not always send an OPTIONS Preflight request for ‘simple’ requests (GET and POST). In that case they just send their origin and expect the response to tell them whether they can access the resource; this is dependent upon the browser’s CORS implementation. Remember when we first tried to call the service and we got an error saying that the origin was not allowed by Access-Control-Allow-Origin? That was because the response did not contain this header with the correct entry.

The Cache-Control header tells the requester not to cache the resource. Access-Control-Allow-Methods specifies which HTTP methods are allowed when accessing the resource. Access-Control-Allow-Headers specifies which headers can be used to make a request. Access-Control-Max-Age specifies how long the results of the Preflight response can be cached in the Preflight result cache.
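Putting it together, a Preflight exchange for our service looks roughly like this (headers abbreviated; the exact set and values depend on the browser and the request being made):

```
OPTIONS /products/12345 HTTP/1.1
Origin: http://localhost:1234
Access-Control-Request-Method: DELETE
Access-Control-Request-Headers: Content-Type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type, Accept
Access-Control-Max-Age: 1728000
```

If the browser is satisfied with the response, it then issues the real DELETE request.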

There is a lot more information available about the CORS implementation. If you are looking for more information two of the best places are the W3C CORS Specification and the Mozilla Developer Network.

Let’s give it another try.

Success. That’s all there is to it. Hopefully you will see that creating RESTful web services with WCF is easy. If you have any questions please feel free to leave a comment.