The foundations of agile – Lean Software Development, Theory of Constraints and Systems Thinking

During a recent brown bag lunch I gave a presentation on the foundations of agile. I talked about some of the underlying philosophies and methodologies that have influenced agile software development.

The presentation covered a little bit of history before looking at Lean Software Development (and its development from the Toyota Production System), the Theory of Constraints and Systems Thinking.

The presentation can be found here.

Remember to have a look at the slide notes, as they contain some additional information not included on the slides themselves.


Managing parent and child collection relationships in Entity Framework – What is an identifying relationship anyway?

Some of the posts on this site serve more as a reminder to myself about a problem I have encountered and the solution I have found for it. This is one such post. The issue addressed below is fairly common and has been described and solved more comprehensively elsewhere, but I have added this so I can find it again in the future should the need arise. If it proves helpful to anyone else then all the better.

It is often desirable to have a parent entity that has a collection of child entities. For example, an Order entity may have a collection of OrderLine entities, or a User entity may have a collection of Role entities. These examples may look similar, but there is a subtle difference between them that can cause behavioural issues in some circumstances.

In the first example the order lines belong to an order and it doesn’t make sense for them to exist on their own. In the second example a Role can exist independently of the User.

This is the difference between an identifying and non-identifying relationship. Another example from Stack Overflow:

A book belongs to an owner, and an owner can own multiple books. But the book can also exist without the owner, and it can change owners. The relationship between a book and an owner is a non-identifying relationship.

A book, however, is written by an author, and the author could have written multiple books. But the book needs to be written by an author; it cannot exist without an author. Therefore the relationship between the book and the author is an identifying relationship.

Whatever the type of relationship, it would be nice to be able to simply add and remove child items in the parent collection and let Entity Framework worry about all the bothersome adding and removing from the database.

Adding to the collection works correctly in both cases, and if you have a non-identifying relationship (the child’s foreign key is nullable) you will also be fine when removing items from the collection.

If you have only implied an identifying relationship (the child’s primary key is its table’s Id field and its foreign key is non-nullable), you are likely to be in for a world of pain when trying to remove items from the child collection.

When you remove an item from a child collection with the above kind of relationship, Entity Framework tries to update the child entity’s foreign key to null. This causes problems because the foreign key is non-nullable, and you get this System.InvalidOperationException when the context saves:

The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.
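To make the failure concrete, the pattern that triggers the exception above looks something like this (the context and entity names here are illustrative assumptions, not taken from a real project):

```csharp
// A sketch of the failing removal pattern. Removing the child from the
// collection only severs the relationship; with a non-nullable foreign key
// and no composite key, EF tries to null the FK and SaveChanges() throws
// the InvalidOperationException quoted above.
using (var context = new ShopContext())
{
    var order = context.Orders.Include("OrderLines").First();
    var line = order.OrderLines.First();

    order.OrderLines.Remove(line);   // EF marks the relationship as deleted
    context.SaveChanges();           // throws: FK is non-nullable
}
```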

The way to stop this from happening is to make the identifying relationship explicit by making the foreign key field of the child part of its primary key. I know that in the ORM world composite keys are not really in vogue, but this makes it explicit to Entity Framework that the child cannot exist without its parent and so should be deleted when it is removed from the parent’s collection.

How do I specify a composite key?

To set the composite key, simply annotate the entity (or use the fluent API as part of OnModelCreating in the context). If you have an OrderLine entity that has an identifying relationship with an Order entity, the OrderLine entity should be annotated like this:

public class OrderLine
{
    [Key, Column(Order = 0), DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    public string Description { get; set; }

    public decimal Cost { get; set; }

    [Key, ForeignKey("Order"), Column(Order = 1)]
    public int OrderId { get; set; }

    public virtual Order Order { get; set; }
}
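If you prefer the fluent API mentioned above, the same composite key can be declared in OnModelCreating. This is a minimal sketch, assuming a context with DbSet&lt;Order&gt; and DbSet&lt;OrderLine&gt; properties:

```csharp
// Equivalent configuration via the fluent API (a sketch; the context
// class is an assumption, not from a real project).
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Composite primary key: the child's own Id plus the parent's key.
    modelBuilder.Entity<OrderLine>()
        .HasKey(ol => new { ol.Id, ol.OrderId });

    // Id remains database-generated even as part of a composite key.
    modelBuilder.Entity<OrderLine>()
        .Property(ol => ol.Id)
        .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
}
```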

Generating SpecFlow reports for your TeamCity builds

You may or may not be aware that SpecFlow has the ability to produce some reports giving information about the features and steps after a test run has finished. The test execution report contains details about the SpecFlow test features, scenarios and steps. The step definition report shows the steps and their statuses, including unused steps and steps with no implementation.

The SpecFlow documentation details how the reports can be generated using SpecFlow.exe at the command line. Whilst it is useful to be able to manually generate these reports, it would be better if we could generate them as part of the TeamCity build process and make the reports available from within the build details of TeamCity. This makes the reports more accessible to all team members and especially useful when showing your product owner the scenarios covered by your acceptance test suite.
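For reference, the manual invocation looks something like this (file names and paths are illustrative; the exact arguments may differ between SpecFlow versions):

```shell
# Test execution report, built from the nunit-console output files
specflow.exe nunitexecutionreport BookShop.AcceptanceTests.csproj /xmlTestResult:TestResult.xml /testOutput:TestResult.txt /out:ExecutionReport.html

# Step definition report
specflow.exe stepdefinitionreport BookShop.AcceptanceTests.csproj /out:StepDefinitionReport.html
```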

Let’s run through getting things up and running.

1. Running the tests using nunit-console.exe

In order to generate the SpecFlow reports you must first run the tests using nunit-console.exe, so that the correct output files are produced to feed into the report generation. The SpecFlow documentation tells us this is the command we need to run:

nunit-console.exe /labels /out=TestResult.txt /xml=TestResult.xml bin\Debug\BookShop.AcceptanceTests.dll

As we need to run nunit-console.exe, it is not possible to use TeamCity’s NUnit runner. There are a few different runner types we could use to run this command, such as Command Line or PowerShell, but the one that best fits our needs is probably MSBuild; as always, the same results can be achieved in lots of different ways.

In Visual Studio, create a build file. I have given mine the imaginative name of nunit.build.

<?xml version="1.0" encoding="utf-8" ?>

<Project ToolsVersion="3.5" DefaultTargets="RunTests"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

<PropertyGroup>
    <NUnitHome>C:/Program Files (x86)/NUnit 2.6.2</NUnitHome>
    <NUnitConsole>&quot;$(NUnitHome)\bin\nunit-console.exe&quot;</NUnitConsole>
    <testResultsTxt>&quot;$(teamcity_build_checkoutDir)/TestResult.txt&quot;</testResultsTxt>
    <testResultsXml>&quot;$(teamcity_build_checkoutDir)/TestResult.xml&quot;</testResultsXml>
    <projectFile>&quot;$(teamcity_build_checkoutDir)\CrazyEvents.AcceptanceTests\CrazyEvents.AcceptanceTests.csproj&quot;</projectFile>
</PropertyGroup>
 
<Target Name="RunTests">
  <Exec Command="$(NUnitConsole) /labels /out=$(testResultsTxt) /xml=$(testResultsXml) $(projectFile)"/>
</Target>  
    
</Project>

I am not an MSBuild expert and it took me a while to get the paths correctly escaped. If you are getting build errors, check the paths first. TeamCity has some properties that you can reference in your build files to give you access to build-related things. I am using teamcity_build_checkoutDir to specify that the output from the build is placed in the root of the checkout directory.

In TeamCity your build step should look something like this:

SpecflowTeamCityNUnit1

With the build file in place and the step configured you may want to give things a run to make sure everything is working as expected. When you do you will notice that there is something missing from TeamCity.

SpecflowTeamCityNUnit2

If you had used the built-in NUnit test runner you would expect to see a Test tab in TeamCity with the details of the tests run as part of the build. The tab is missing because TeamCity doesn’t know about the tests being run by the MSBuild task. The next step will show how to get the test tab and results back.

2. Getting the test tab to show in TeamCity

Luckily for us, the good people at TeamCity have created an NUnit addin that does just that. To use the addin, it needs to be present in the NUnit addins folder. The best way to ensure the addin is present is to add a Copy task to the MSBuild target. The full build file now looks like this:

<?xml version="1.0" encoding="utf-8" ?>

<Project ToolsVersion="3.5" DefaultTargets="RunTests"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <ItemGroup>
        <NUnitAddinFiles Include="$(teamcity_dotnet_nunitaddin)-2.6.2.*" />
    </ItemGroup>

    <PropertyGroup>
        <NUnitHome>C:/Program Files (x86)/NUnit 2.6.2</NUnitHome>
        <NUnitConsole>&quot;$(NUnitHome)\bin\nunit-console.exe&quot;</NUnitConsole>
        <testResultsTxt>&quot;$(teamcity_build_checkoutDir)/TestResult.txt&quot;</testResultsTxt>
        <testResultsXml>&quot;$(teamcity_build_checkoutDir)/TestResult.xml&quot;</testResultsXml>
        <projectFile>&quot;$(teamcity_build_checkoutDir)\CrazyEvents.AcceptanceTests\CrazyEvents.AcceptanceTests.csproj&quot;</projectFile>
    </PropertyGroup>

    <Target Name="RunTests">
        <MakeDir Directories="$(NUnitHome)/bin/addins" />
        <Copy SourceFiles="@(NUnitAddinFiles)" DestinationFolder="$(NUnitHome)/bin/addins" />
        <Exec Command="$(NUnitConsole) /labels /out=$(testResultsTxt) /xml=$(testResultsXml) $(projectFile)"/>        
    </Target>

</Project>

An addins directory is created and the two addin files are copied from TeamCity into it. It is important to specify the correct version of the addin files in the NUnitAddinFiles section for the version of NUnit you are using. With the build file checked in, the next build will have the previously missing Test tab, and the test summary will be displayed on the run overview.

SpecflowTeamCityNUnit3

3. Running the SpecFlow reports

To run the SpecFlow reports I have used another build target that makes two calls to SpecFlow.exe with the files generated as output by NUnit. The final build file looks like this:

<?xml version="1.0" encoding="utf-8" ?>

<Project ToolsVersion="3.5" DefaultTargets="RunTests;SpecflowReports"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <ItemGroup>
        <NUnitAddinFiles Include="$(teamcity_dotnet_nunitaddin)-2.6.2.*" />
    </ItemGroup>

    <PropertyGroup>
        <NUnitHome>C:/Program Files (x86)/NUnit 2.6.2</NUnitHome>
        <NUnitConsole>&quot;$(NUnitHome)\bin\nunit-console.exe&quot;</NUnitConsole>
        <testResultsTxt>&quot;$(teamcity_build_checkoutDir)/TestResult.txt&quot;</testResultsTxt>
        <testResultsXml>&quot;$(teamcity_build_checkoutDir)/TestResult.xml&quot;</testResultsXml>
        <projectFile>&quot;$(teamcity_build_checkoutDir)\CrazyEvents.AcceptanceTests\CrazyEvents.AcceptanceTests.csproj&quot;</projectFile>
        <SpecflowExe>&quot;C:\Users\Admin\Documents\Visual Studio 2010\Projects\CrazyEventsNew\packages\SpecFlow.1.9.0\tools\specflow.exe&quot;</SpecflowExe>
    </PropertyGroup>

    <Target Name="RunTests">
        <MakeDir Directories="$(NUnitHome)/bin/addins" />
        <Copy SourceFiles="@(NUnitAddinFiles)" DestinationFolder="$(NUnitHome)/bin/addins" />
        <Exec Command="$(NUnitConsole) /labels /out=$(testResultsTxt) /xml=$(testResultsXml) $(projectFile)"/>        
    </Target>

    <Target Name="SpecflowReports">
        <Exec Command="$(SpecflowExe) nunitexecutionreport $(projectFile) /xmlTestResult:$(testResultsXml) /testOutput:$(testResultsTxt) /out:&quot;$(teamcity_build_checkoutDir)/SpecFlowExecutionReport.html&quot;"/>
        <Exec Command="$(SpecflowExe) stepdefinitionreport $(projectFile) /out:&quot;$(teamcity_build_checkoutDir)/SpecFlowStepDefinitionReport.html&quot;"/>
    </Target>

</Project>

Now we just need to get the HTML output of the reports to show in TeamCity.

4. Displaying the SpecFlow reports as tabs in the build output

Again, the people at TeamCity have made this really easy. First, it is necessary to set the required files as build artifacts in the build configuration.

SpecflowTeamCityNUnit4

The final part of the jigsaw is to specify that the report HTML file artifacts should be used as report tabs. Head over to the Report Tabs section of the Administration page and specify the new report tabs.

SpecflowTeamCityNUnit5

Now your tabs will be visible in TeamCity.

SpecflowTeamCityNUnit6

Behavioural testing in .Net with SpecFlow and Selenium (Part 2)

In this series of posts I am going to run through using Selenium and SpecFlow to test .Net applications via the user interface. I will talk about the situations in which this type of testing can be used, and how we can create effective, non-brittle tests with the help of the Page Object pattern.


In the first post in this series I looked at creating a scenario with SpecFlow and generating some empty step definitions. In this post I will implement the steps using Selenium in order to make the scenario test pass.

Selenium

Selenium is a web automation framework. In simple terms, it allows you to drive a browser via code. Selenium can be used in many browser automation situations, but one of its main uses is the automated testing of web applications.

When it comes to testing the behaviours of web pages there has traditionally been a problem where the testing code becomes tied to the implementation details of the page. This can make the tests brittle as a change to the page implementation details, such as a change to a field name or Id, will break the test even if the behaviour of the page has not changed.

In order to protect against this it is desirable to encapsulate the implementation details of a page under test in a single place, so that if there is an implementation change there is only a single place to update, instead of having to fix all the separate bits which rely on those details.

Separating the behaviour of a page from its implementation details is the purpose of the Page Object pattern. A page object is simply a class that encapsulates the implementation details for a web page, allowing the automation to focus only on behaviour.

Implementing the first scenario

The first test is all about the Search Page, so it is the Search Page page object that is created first. I find it easiest to work in a TDD-style workflow, using the SpecFlow steps to guide me towards the implementation one step at a time. Let’s have a look at the implementation:

public class CrazyEventsSearchPage
{
    [FindsBy(How = How.Id, Using = "region")]
    private IWebElement regionDropdown;

    [FindsBy(How = How.Name, Using = "Submit")]
    private IWebElement submitButton;

    private static IWebDriver driver;

    public static CrazyEventsSearchPage NavigateTo(IWebDriver webDriver)
    {
        driver = webDriver;
        driver.Navigate().GoToUrl("http://localhost:2418/event/Search");
        var searchPage = new CrazyEventsSearchPage();
        PageFactory.InitElements(driver, searchPage);
        return searchPage;
    }

    public void SelectRegionDropdown()
    {
        regionDropdown.Click();
    }

    public IList<string> GetAllPossibleRegions()
    {
        var selectList = new SelectElement(regionDropdown);
        var options = selectList.Options;

        return options.Select(webElement => webElement.Text).ToList();
    }
}

Remember that the job of a page object is to encapsulate the implementation details of a web page. Selenium provides some helpers for working with page objects. To use them, make sure you have installed the Selenium WebDriver Support Classes into your test project via NuGet. PageFactory.InitElements(driver, searchPage) takes a page object, in this case searchPage, and ensures that all of its elements are populated with the details from the web page. Here it will populate the regionDropdown and submitButton web elements ready to be used.

It is the methods and fields of the page object that encapsulate the implementation details. If the id of the region dropdown needs to change, only the FindsBy attribute would need to be updated. If submitting the form needed to do something else, only a single change would be required.

The NavigateTo method is used as the entry point into the web site. The other methods are used by the SpecFlow steps.

[Binding]
public class EventSearchSteps
{
    private CrazyEventsSearchPage searchPage;
    private CrazyEventsSearchResultsPage searchResultsPage;
    private IWebDriver driver;

    [BeforeScenario()]
    public void Setup()
    {
        driver = new FirefoxDriver();
    }

    [AfterScenario()]
    public void TearDown()
    {
        driver.Quit();
    }

    [Given(@"that I want to search for an event by region")]
    public void GivenThatIWantToSearchForAnEventByRegion()
    {
        searchPage = CrazyEventsSearchPage.NavigateTo(driver);
    }

    [When(@"a list of possible regions is presented")]
    public void WhenAListOfPossibleRegionsIsPresented()
    {
        searchPage.SelectRegionDropdown();
    }

    [Then(@"the list should contain ""(.*)"", ""(.*)"", ""(.*)"" and ""(.*)""")]
    public void ThenTheListShouldContainAnd(string p0, string p1, string p2, string p3)
    {
        var regions = searchPage.GetAllPossibleRegions();
        Assert.IsTrue(regions.Contains(p0));
        Assert.IsTrue(regions.Contains(p1));
        Assert.IsTrue(regions.Contains(p2));
        Assert.IsTrue(regions.Contains(p3));
    }

    [Then(@"""(.*)"" should be the default value")]
    public void ThenShouldBeTheDefaultValue(string p0)
    {
        var selectedRegion = searchPage.GetSelectedRegion();
        Assert.IsTrue(selectedRegion.Contains(p0));
    }
}

I have added some setup and teardown methods to ensure that the web driver is initialised and shut down correctly. With all the implementation in place the scenario now passes.

Another scenario

It’s time to add another scenario:

Scenario: Search for events in a region and display the results
	Given that I want to search for an event by region
	When I select the "London" region
	And perform a search
	Then The search results page is displayed
	And the following results are displayed
	| Event Code | Event Name                        | Region | Description                           |
	| CH/3001    | Cat Herding                       | London | A starter session for the uninitiated |
	| CH/302     | Cat Herding - Advanced techniques | London | Taking it to the next level           |

This scenario uses a SpecFlow table to define the expected results. Gherkin has a lot of powerful features for getting the most out of scenario definitions. The step definitions were generated in the same way as for the first scenario. I created another page object for the search results:

public class CrazyEventsSearchResultsPage
{
    private static IWebDriver driver;

    [FindsBy(How = How.Id, Using = "searchResults")] private IWebElement searchResults;

    public CrazyEventsSearchResultsPage(IWebDriver webDriver)
    {
        driver = webDriver;
    }

    public bool IsCurrentPage()
    {
        return driver.Title == "Crazy Events - Search Results";
    }

    public List<List<string>> GetResults()
    {
        var results = new List<List<string>>();
        var allRows = searchResults.FindElements(By.TagName("tr"));
        foreach (var row in allRows)
        {
            var cells = row.FindElements(By.TagName("td"));
            var result = cells.Select(cell => cell.Text).ToList();
            results.Add(result);
        }

        return results;
    }
}

This page object doesn’t have a direct entry point, as this page cannot be reached directly without a search being performed. Instead it is reached from the search page object. I have added some extra functionality for performing the search and initialising the search results page object. We will see these in a moment.

Here are the step definitions for the new steps. Remember that identical steps share a single implementation.

[When(@"I select the ""(.*)"" region")]
public void WhenISelectTheRegion(string region)
{
    searchPage.SelectRegion(region);
}

[When(@"perform a search")]
public void WhenPerformASearch()
{
    searchResultsPage = searchPage.SubmitSearch();
}

[Then(@"The search results page is displayed")]
public void ThenTheSearchResultsPageIsDisplayed()
{
    Assert.IsTrue(searchResultsPage.IsCurrentPage());
}

[Then(@"the following results are displayed")]
public void ThenTheFollowingResultsAreDisplayed(Table table)
{
    var results = searchResultsPage.GetResults();
    var i = 1; //ignore the header

    foreach (var row in table.Rows)
    {
        Assert.IsTrue(results[i].Contains(row["Event Code"]));
        Assert.IsTrue(results[i].Contains(row["Event Name"]));
        Assert.IsTrue(results[i].Contains(row["Region"]));
        Assert.IsTrue(results[i].Contains(row["Description"]));
        i++;
    }
}

The additional methods for the search page object are:

public string GetSelectedRegion()
{
    var regionElement = new SelectElement(regionDropdown);
    return regionElement.SelectedOption.Text;
}

public void SelectRegion(string region)
{
    var regionElement = new SelectElement(regionDropdown);
    regionElement.SelectByText(region);
}

public CrazyEventsSearchResultsPage SubmitSearch()
{
    submitButton.Click();

    var searchResultsPage = new CrazyEventsSearchResultsPage(driver);
    PageFactory.InitElements(driver, searchResultsPage);

    return searchResultsPage;
}

The SubmitSearch() method initialises the search results page object ready to be used in the scenario steps.

In conclusion…

Page objects are a powerful way to overcome the traditional problems with automated browser tests. When Selenium is coupled with SpecFlow they allow for a TDD style workflow that can give true outside-in development.

Another useful side effect of using SpecFlow scenarios is that the steps create an API for the application. Once the steps have been created it is possible to create new scenarios without any coding at all, which makes them accessible to members of the team who might otherwise have found implementing tests a bit too developer-centric.

Even the implementation of the page objects is not generally too complicated once the pattern has been established. I appreciate that there is a bit more of a learning curve, but I think this type of testing offers far more than record-and-playback style behavioural tests, and it would be a useful skill for all team members to have.

Finally, as with all code, do not hesitate to refactor the scenarios, steps and page objects. There are lots of things that could be refactored in this simple example. A couple of obvious candidates are the hard-coded URL and the logic used to get the data from the search results page. I will leave these as an exercise for the reader.


Behavioural testing in .Net with SpecFlow and Selenium (Part 1)

In this series of posts I am going to run through using Selenium and SpecFlow to test .Net applications via the user interface. I will talk about the situations in which this type of testing can be used, and how we can create effective, non-brittle tests with the help of the Page Object pattern.


I have consciously avoided using the term Behaviour Driven Development (BDD) in the title, as this post isn’t really about BDD as such. It is about using some tools in a way that could be applied in many situations, with BDD being just one of them. I want to discuss how to use the BDD framework SpecFlow and the web automation framework Selenium to create tests that exercise an application via its web user interface.

When to use this approach

You could use this approach in several situations. In each, the idea is to create an executable specification and automated tests for your code base. The two most common situations are:

  1. Automated acceptance tests – let’s call this the ‘test afterwards’ approach. There is often a desire to create acceptance tests (or integration tests or smoke tests or whatever…) for an application. Using SpecFlow and Selenium you can create tests that exercise the application as a user would actually interact with it. A suite of acceptance tests is a worthwhile addition to any application.
  2. Behaviour Driven Development style – this is the ‘test before’ approach. In Test Driven Development with MVC I talked about how a true BDD style would ideally test user interaction with the application. These tools allow you to do just that. The workflow would be very similar to the outside-in approach I describe in that post, only you would create scenarios in SpecFlow instead of writing unit tests. Of course there are tons of approaches when it comes to BDD, this being just one of them. Liz Keogh has a useful summary and introduction to BDD that is worth a look if you want to know more.

In this post I am going to use a test afterwards approach so that I can focus on the tooling.

Installing SpecFlow and Selenium

I have created a new folder to house my acceptance tests. Installing Selenium is easy, simply grab the latest version from NuGet (you are using NuGet, right?). Make sure to select Selenium WebDriver. For SpecFlow grab the latest version from NuGet and then get the Visual Studio integration from the Visual Studio Gallery. This will add some extra VS tooling we will use later.

The first scenario

I am going to revisit the event search scenarios from Test Driven Development with MVC and expand on them a little. The first thing is to add a SpecFlow feature file to the project. Right-click on the project and select New Item…, then add a new SpecFlow feature file. I have called mine EventSearch.feature. The feature file has a feature, which is essentially a user story, and a number of scenarios in the Given-When-Then format.

The feature files are written in the Gherkin language, which is intended to be a human readable DSL for describing business behaviour.

Let’s start populating the feature file with the first scenario.

Feature: EventSearch
	In order to find an event to participate in
	As a potential event delegate
	I want to be able to search for an event

Scenario: Event Search Criteria should be displayed
	Given that I want to search for an event by region
	When I select to choose a region
	Then a list of possible regions is presented
	And the list should contain "All", "North", "South" and "London"
	And "All" should be the default value

The next step is to create the bindings that allow us to wire up the Gherkin feature to our application’s code. Before we do that we just need to tell SpecFlow to use Visual Studio’s testing framework, MSTest, instead of the default NUnit. This is done by adding a line of configuration to your app.config; you will find that the specFlow section already exists:

<specFlow>
<!-- For additional details on SpecFlow configuration options see http://go.specflow.org/doc-config -->

    <unitTestProvider name="MsTest.2010"/>
</specFlow>

As a brief aside, you will notice that the feature file has a code-behind .cs file that has been created by SpecFlow. If you have a look inside you will see that the code is a rather awkward-looking MSTest test class. It is used to run the scenarios that are described in the feature file and implemented in the binding step definitions we are about to create. It is not pretty, but it is not intended to be read or edited by a user, as it is regenerated by SpecFlow whenever the feature file is saved. SpecFlow will generate the correct type of testing hooks for the test framework you are using.

Create the step definitions

The step definitions are where we add code to perform the tests. They are defined in a class with a [Binding] attribute. When we run the feature, the test runner runs the tests defined in the feature code-behind, which is bound to these steps by SpecFlow. The easiest way to create these bindings is to use SpecFlow’s Visual Studio tooling. Right-click on the scenario in the feature file and select “Generate step definitions”. This brings up a dialogue box that allows you to create the necessary bindings to wire up the test. I created the bindings in a new class called EventSearchSteps. The generated bindings look like this:

using System;
using TechTalk.SpecFlow;

namespace CrazyEvents.AcceptanceTests
{
    [Binding]
    public class EventSearchSteps
    {
        [Given(@"that I want to search for an event by region")]
        public void GivenThatIWantToSearchForAnEventByRegion()
        {
            ScenarioContext.Current.Pending();
        }

        [Then(@"a list of possible regions is presented")]
        public void ThenAListOfPossibleRegionsIsPresented()
        {
            ScenarioContext.Current.Pending();
        }

        [Then(@"the list should contain ""(.*)"", ""(.*)"", ""(.*)"" and ""(.*)""")]
        public void ThenTheListShouldContainAnd(string p0, string p1, string p2, string p3)
        {
            ScenarioContext.Current.Pending();
        }

        [Then(@"""(.*)"" should be the default value")]
        public void ThenShouldBeTheDefaultValue(string p0)
        {
            ScenarioContext.Current.Pending();
        }
    }
}

The step definitions have been created with a pending implementation, ready for us to complete. You will see that some of the generated steps contain attributes with what look like regular expressions, and that the methods have parameters. This is a feature of SpecFlow that allows step definitions to be reused when the only thing that changes is a string value. We will see how this is used later. SpecFlow has lots of useful features like this; I would advise checking out the documentation to learn more.

Let’s run the test and see what happens.

ReSharper’s test runner shows that the test was inconclusive. This is not really a surprise, as the test implementation is pending. Of course, you can use Visual Studio’s built-in test runner instead.

In the next post we will have a look at how we can implement the bindings with Selenium to automate browser interactions.


RESTful Web Services with Windows Communication Foundation

In this post I am going to investigate how it is possible to create RESTful web services with Windows Communication Foundation. As well as discussing how the services are created I am also going to dig a bit deeper into some of the issues that can arise with this type of approach, especially the Same Origin Policy for browser based service execution, and how we can work around it.

What is REST?

I don’t really want to go too deep into REpresentational State Transfer, as there are lots of excellent resources already written that do a far better job of explaining it; however, a brief recap of the basics can’t hurt.

REST is an architectural pattern for accessing resources via HTTP. A resource can represent any type of item a client application may wish to access. A client can be a user or could be a machine calling a web service to get some data.

If you were creating a traditional RPC style service to return the details of a product you may well have created something that is called like this:

http://superproducts.com/products/getProduct?id=12345

In this example we have a verb (get) and a noun (Product). A RESTful version of this service would be something along the lines of:

http://superproducts.com/products/12345

In this example there is no explicit verb usage. Instead it is the HTTP verb GET that is used. In a RESTful web service it is the HTTP verbs (GET, POST, PUT, DELETE) that are used to interact with the resource.

GET – Used to return a resource. It may be a single product, as in the example above, or it could be a collection of products.
POST – Used to create a new resource.
PUT – Used to update a resource. Sometimes PUT is also used to create a new resource if the resource to be updated is not present.
DELETE – Used to delete a resource.

The operations of a RESTful web service should be idempotent where appropriate to build resilience into the operations. It also makes sense that some verbs are not applicable to certain resources, depending on whether the resource is a collection of resources or a single resource (as we will see later).

REST is not a new idea – it was introduced by Roy Fielding in his 2000 doctoral dissertation, but has only recently been gaining serious traction against more widespread web service styles such as SOAP. REST’s lightweight style and potential for smaller payloads have seen its popularity increase, and major vendors, including Microsoft, have sought to add support for REST services to their products.

A RESTful web service subscribes to the principles of REST, as well as specifying the MIME type of the data payload and the operations supported. REST itself does not mandate the type of payload to be used, which could really be any data format. In practice JSON has emerged as the de facto standard for RESTful services due to its lightweight nature.
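For illustration, a JSON representation of one of the products from the example above might look something like this (the field names here are hypothetical, chosen just to suggest the shape of a typical payload):

```json
{
    "ProductId": "12345",
    "Name": "Super Widget",
    "Description": "A generic widget",
    "Price": 9.99
}
```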

Creating a WCF REST service

The WCF service will provide product information for some generic widgets.

The RESTful api for the service will support the following operations:

/Products

GET – Return a collection of products
POST – Add a new product to the product collection
PUT – Not supported, as we want to edit the individual products within the collection, not replace the collection itself.
DELETE – Not supported, as it would not really make sense to delete an entire collection of products

/Products/{productId} e.g. /products/12345

GET – Get the product represented by the product Id
POST – Not supported, as we cannot add anything at an individual product level
PUT – Update the product details.
DELETE – Delete the product

I am going to move fairly quickly when it comes to creating the service. If you are not familiar with WCF (or need a refresher) I recommend that you have a look at the earlier series on creating WCF services.

I am going to make use of some of the new .Net 4.0 WCF features and not use a .svc file for the service. When creating RESTful services this has the advantage of making the URL look a little more ‘RESTful’, so instead of http://website/products.svc/1 we can have http://website/products/1.

Let’s create a new WCF application solution. I have thrown away the .svc file that the wizard created for us. Rename the interface to something sensible. I have called mine ProductService. Remember that this internal naming of the service contract and its operations is not exposed once the service is up and running. It makes sense to name the contract and operations in a way that is sensible for the project and then map them to a more REST friendly URL. We will see how to do this later in the post.

A RESTful service is in many respects similar to any other. It still follows the ABC of WCF. The service has an Address at which the operations are invoked, it has a Binding that specifies the way the service will communicate, and it also has a Contract that defines the operations. The Contract for the ProductService looks like this:

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    [WebInvoke(Method = "GET", UriTemplate = "", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    List<Product> GetProducts();

    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "", RequestFormat = WebMessageFormat.Json)]
    void CreateProduct(Product product);

    [OperationContract]
    [WebInvoke(Method = "GET", UriTemplate = "/{ProductId}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    Product GetProductById(string productId);

    [OperationContract]
    [WebInvoke(Method = "PUT", UriTemplate = "/{ProductId}", RequestFormat = WebMessageFormat.Json)]
    Product UpdateProductDetails(string productId, Product product);

    [OperationContract]
    [WebInvoke(Method = "DELETE", UriTemplate = "/{ProductId}", RequestFormat = WebMessageFormat.Json)]
    Product DeleteProduct(string productId);
}

It is the WebInvoke attribute that identifies that a service operation is available via REST. The Method property defines the HTTP verb that is to be used. The UriTemplate property allows a templated URL to be specified for the operation; parameters for the URL are specified between {}. The RequestFormat and ResponseFormat properties specify the data format for the exchange. We are using JSON.
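To make the templating idea concrete, here is a small JavaScript sketch (not WCF code, purely an illustration) of the kind of matching a UriTemplate such as /{ProductId} implies: the braced segment is captured from the incoming path and handed to the operation as a parameter.

```javascript
// Illustrative only: captures {name} segments from a template
// and returns them as a map of parameter values.
function matchTemplate(template, path) {
    var names = [];
    var pattern = template.replace(/\{(\w+)\}/g, function (whole, name) {
        names.push(name);
        return "([^/]+)";
    });
    var match = path.match(new RegExp("^" + pattern + "$"));
    if (!match) return null;
    var result = {};
    names.forEach(function (name, i) { result[name] = match[i + 1]; });
    return result;
}

console.log(matchTemplate("/{ProductId}", "/12345")); // { ProductId: '12345' }
console.log(matchTemplate("/{ProductId}", "/"));      // null - nothing to capture
```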

As we are not using .svc files we must supply some routing so WCF knows how to route the requests to the service. Open up the Global.asax (or create one if the project doesn’t already have one) and locate the Application_Start method and add in a new route.

protected void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.Add(new ServiceRoute("Products", new WebServiceHostFactory(), typeof(ProductService)));
}

This route tells WCF that when a request comes in for Products it is routed to the ProductService. As an aside, if you were using a custom service host factory, for example if you were using an IoC container to inject dependencies, you could specify it here instead of the standard WebServiceHostFactory.

It is the URI template and the routing that allow us to provide the RESTful web service addresses. Although we have a service operation called GetProductById that takes a product Id, the resource is accessed via a URL in the form of /Products/12345.

Now the routing is out of the way we are ready to go. The actual implementation of the service operations is not very interesting here, as it has no bearing on the REST side of things; it is just the same as any implementation of a WCF service contract interface.

“But wait”, I hear you all shout, “you haven’t configured the service in the Web.config”. You are correct. In this example I am going to use another feature of .Net 4.0 WCF: Default Configuration. This allows WCF to automatically create default configuration for bindings, endpoints, behaviors and protocol mappings without the need to get your hands dirty in the Web.config. Of course, in a production environment it is likely that you would want more control over your configuration, and this can be added to the config as required.

Let’s try the service

In order to test the service we need to call it, so a simple test harness is required. I created a simple HTML page using jQuery to call out to the service:

<!DOCTYPE html>
<html>
<head>
    <title>WCF RESTful Web Service Example</title>
    <script type="text/javascript" src="http://code.jquery.com/jquery-1.7.2.min.js"></script>
    <script type="text/javascript">

        var GetProductRequest = function () {
            $.ajax({
                type: "GET",
                url: "http://localhost:52845/products/",
                contentType: "application/json",
                dataType: "text",
                success: function (response) {
                    $("#getProductsResponse").text(response);
                    alert(response);
                },
                error: function (error) {
                    console.log("ERROR:", error);
                },
                complete: function () {
                }
            });
        };

        // Wait until the DOM is ready before making the request
        $(function () {
            GetProductRequest();
        });

    </script>
</head>
<body>
    <h1>
    WCF RESTful Web Service Example
    </h1>
    <h2>
    GET Response
    </h2>
    <pre id="getProductsResponse"></pre>
</body>
</html>

This simply uses jQuery’s ajax functionality to make a call to the service and output the response on the page. Let’s run it. As another aside, if you are using the ASP.NET Development Server when building and testing your services (and the chances are you probably are) then you will only be able to test the GET and POST methods; the others are not allowed and will return a 405 error. If you want to be able to test the PUT and DELETE methods I would recommend using IIS Express as your development web server.

I am using Chrome 21 and there appears to be a problem. If we look at the developer tools in Chrome (press F12) we can see that a request was made with the OPTIONS method, and it returned a status of 405 Method Not Allowed. There is also a message about the Origin not being allowed by something called Access-Control-Allow-Origin. This is due to the Same Origin Policy.

The Same origin policy

The Same Origin Policy is a browser security concept that prevents a client-side web application running on one site (origin) from obtaining data retrieved from another site. This pertains mainly to cross site script requests. For two requests to be considered to be from the same origin they must have the same domain name, application layer protocol and port number.
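As a rough sketch of the comparison a browser performs (the URLs here are made up for illustration), two addresses share an origin only when the scheme, host and port all match:

```javascript
// Illustrative origin comparison - real browsers apply more rules,
// but scheme + host + port is the core of the check.
function sameOrigin(a, b) {
    var ua = new URL(a);
    var ub = new URL(b);
    return ua.protocol === ub.protocol &&
           ua.hostname === ub.hostname &&
           ua.port === ub.port;
}

console.log(sameOrigin("http://superproducts.com/products", "http://superproducts.com/orders"));        // true - only the path differs
console.log(sameOrigin("http://superproducts.com/products", "https://superproducts.com/products"));     // false - protocol differs
console.log(sameOrigin("http://superproducts.com/products", "http://superproducts.com:8080/products")); // false - port differs
```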

Although the same origin policy is an important security concept with a lot of valid uses, it does not help our legitimate situation: we need to be able to access the RESTful web service from a different origin.

Cross-origin resource sharing

Cross-origin resource sharing is a way of working within the restrictions that the same origin policy applies. In brief, the browser can use a preflight request (this is the OPTIONS method request we have already seen) to find out whether a resource is willing to accept a cross origin request from a given origin. The requests and responses are validated to stop man-in-the-middle type attacks. If everything is okay the original request can be performed. Of course there is more to it than that, but that is all we need to know for now. More information is available in the W3C spec for CORS.
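To make the exchange concrete, a preflight for our product service might look roughly like this (the header values are illustrative rather than captured from a real session):

```http
OPTIONS /products/ HTTP/1.1
Host: localhost:52845
Origin: http://localhost
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: Content-Type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type, Accept
Access-Control-Max-Age: 1728000
```

If the response headers satisfy the browser, the original PUT request is then sent as normal.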

As usual with browsers, CORS support is not universal, nor is it implemented consistently across all browsers. In some browsers it is all but ignored (I am looking at you, IE 8/9). We will need to ensure that the CORS support we add to the WCF service works in all situations. ‘When can I use Cross-Origin Resource Sharing‘ is a handy tool to see if your browser supports CORS.

Dealing with an OPTIONS preflight request in WCF

There are a couple of ways of dealing with CORS in WCF.

The first is some custom WCF behaviours that can be applied to a service operation via an attribute. This approach has the advantage that it is possible to explicitly target which operations you want to accept cross domain requests, however it has quite an involved implementation. A good overview can be found here.

I am going to focus on the second method, which is to intercept the incoming requests, check for the presence of the OPTIONS method and respond appropriately. I will show the code and then we can talk about how it works. Back to the Global.asax, this time in the Application_BeginRequest method.

protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext.Current.Response.AddHeader("Access-Control-Allow-Origin", "*");
    if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
    {
        HttpContext.Current.Response.AddHeader("Cache-Control", "no-cache");
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
        HttpContext.Current.Response.AddHeader("Access-Control-Allow-Headers", "Content-Type, Accept");
        HttpContext.Current.Response.AddHeader("Access-Control-Max-Age", "1728000");
        HttpContext.Current.Response.End();
    }
}

As the method name suggests, this is called when a request is received. The first action is to add an Access-Control-Allow-Origin header to the response. This is used to specify the URLs that can access the resource. If you are feeling generous and want any site (or origin) to access the resource you can use the * wildcard. The reason this header sits outside of the check for the OPTIONS method is that browsers sometimes do not send an OPTIONS preflight request for ‘simple’ requests (GET and POST). In this case they just send their origin and expect the response to tell them whether they can access the resource. This is dependent upon the browser’s CORS implementation. Remember when we first tried to call the service and we got an error saying that the Origin was not allowed by Access-Control-Allow-Origin? That was because the response did not contain this header with the correct entry.

The Cache-Control header tells the requester not to cache the resource. Access-Control-Allow-Methods specifies which HTTP methods are allowed when accessing the resource. Access-Control-Allow-Headers specifies which headers can be used to make a request. Access-Control-Max-Age specifies how long the result of the preflight response can be cached in the preflight result cache.

There is a lot more information available about the CORS implementation. If you are looking for more, two of the best places are the W3C CORS Specification and the Mozilla Developer Network.

Let’s give it another try.

Success. That’s all there is to it. Hopefully you will see that creating RESTful web services with WCF is easy. If you have any questions please feel free to leave a comment.

Filtering a CAML Query by Target Audience


This post is not really my usual area of expertise, but hopefully it might be useful to someone who is trying to do something along the same lines.

Recently I had the opportunity to work on a Silverlight control to be displayed on a SharePoint Intranet site. The control was pulling data from a SharePoint list via a Collaborative Application Markup Language (CAML) query and then filtering the results to select only the items with the required Target Audiences. This approach had some performance problems as it was necessary to return a large number of results to ensure that the required number of filtered results was contained within the result set.

So, for example, it would have to return 100 results to filter down to the 10 results for the required target audience. Needless to say this is not a great idea. A better approach would be to filter the list on the server as part of the CAML query and then return only the results we were interested in.

So, just how would you go about adding some audience targeting filters to a CAML query, where there could be one or more possible target audiences?

The query would effectively become (in SQL like syntax):

SELECT * FROM LIST WHERE TargetAudiences IN (AudienceList)

However, SharePoint lists don’t really allow this type of query. For an item in a list the audiences are a string of audience GUIDs, separated by either commas or semi-colons (this article explains why this is the case), so the IN clause cannot be used. The query becomes something like:

SELECT * FROM LIST WHERE TargetAudiences LIKE %AudienceGuid1% OR TargetAudiences LIKE %AudienceGuid2% OR TargetAudiences LIKE %AudienceGuid3% ... etc

CAML does not have a LIKE operator, but it does have Contains, which is close enough. So all I have to do is build up a query string that is valid, and do it in such a way that it can work for 1 audience, 20 audiences, or however many.

I am just going to come out and say it: the CAML logical join (And / Or) operators are weird. The operators have to be applied in pairs (maybe in homage to Noah’s Ark?).

A single OR with 2 conditions:

<Or>
    Condition1
    Condition2
</Or>

3 conditions:

<Or>
    <Or>
        Condition1
        Condition2
    </Or>
    Condition3			
</Or>

This can lead to some spectacularly ugly nested conditions that are both hard to write and hard to read. You know a language’s syntax is bad when there are lots of query building and converting tools available just to give you a fighting chance of using it correctly.
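For example, one valid way to Or four conditions together is to pair them off and then Or the pairs:

```xml
<Or>
    <Or>
        Condition1
        Condition2
    </Or>
    <Or>
        Condition3
        Condition4
    </Or>
</Or>
```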

I needed to generate a query that could have an arbitrary number of Contains statements Or’d together from a list of audience GUIDs provided in a List (it doesn’t really matter where the List came from).

A contains query looks like this:

<Contains>
    <FieldRef Name='Target_x0020_Audiences'/>
    <Value Type='TargetTo'>11111111-1111-1111-1111-111111111111</Value>
</Contains>

A contains query like this is not ideal, as its performance is poor due to the string matching it has to do. I am not sure that there is any other way to do this that would perform better. If there is let me know in the comments, I would love to know.

If we wanted to just look for a single audience, the query above would be fine, but I needed to look for lots of audiences and get the results that matched at least one of them. To create the query, a small utility class was in order. The basic idea was to create the Contains syntax for each Guid and push it onto a queue, then take the top 2 items and OR them together before pushing the result back on the queue. Repeat this until there was a single item in the queue, which is the query that we want.

So, if A, B, C etc. represent a Contains condition, and the () represent the Or tags, the process for five audiences looks like this:

----> Head
E D C B A --> (AB) E D C --> (CD) (AB) E --> ((AB)E) (CD) --> ((CD)((AB)E))

The code from the utility class:

public static string CreateTargetAudienceCriteria(List<string> audienceList)
{
    var audienceQueue = new Queue<string>();

    audienceList.ForEach(audience => audienceQueue.Enqueue(BuildContains(audience)));

    while (audienceQueue.Count > 1)
    {
        audienceQueue.Enqueue(String.Format("{0}{1}{2}{3}", "<Or>\r\n", audienceQueue.Dequeue(), audienceQueue.Dequeue(), "</Or>\r\n"));
    }

    return audienceQueue.Dequeue();
}

private static string BuildContains(string audience)
{
    return String.Format("<Contains>\r\n<FieldRef Name='Target_x0020_Audiences'/><Value Type='TargetTo'>{0}</Value>\r\n</Contains>\r\n", audience);
}

And it would be simply used within a CAML query as shown below. It is likely that your query would include more in the Where clause:

var camlQuery = new CamlQuery
{
    ViewXml = string.Format(
        @"<View>
            <Query>
                <Where>
                    {0}
                </Where>
            </Query>
        </View>",
        TargetAudience.CreateTargetAudienceCriteria(audienceList))
};