Generating SpecFlow reports for your TeamCity builds

You may or may not be aware that SpecFlow has the ability to produce some reports giving information about the features and steps after a test run has finished. The test execution report contains details about the SpecFlow test features, scenarios and steps. The step definition report shows the steps and their statuses, including unused steps and steps with no implementation.

The SpecFlow documentation details how the reports can be generated using SpecFlow.exe at the command line. Whilst it is useful to be able to manually generate these reports, it would be better if we could generate them as part of the TeamCity build process and make the reports available from within the build details of TeamCity. This makes the reports more accessible to all team members and especially useful when showing your product owner the scenarios covered by your acceptance test suite.

Let’s run through getting things up and running.

1. Running the tests using nunit-console.exe

To generate the SpecFlow reports you must first run the tests using nunit-console.exe so that the correct output files are produced to feed into the report generation. The SpecFlow documentation tells us this is the command we need to run:

nunit-console.exe /labels /out=TestResult.txt /xml=TestResult.xml bin\Debug\BookShop.AcceptanceTests.dll

As we need to run nunit-console.exe it is not possible to use TeamCity’s NUnit runner. There are a few different runner types we could use to run this command, such as Command Line or PowerShell, but the one that best fits our needs is probably MSBuild. As always, the same results can be achieved in lots of different ways.

In Visual Studio create a build file. I have given mine the imaginative name of nunit.build

<?xml version="1.0" encoding="utf-8" ?>

<Project ToolsVersion="3.5" DefaultTargets="RunTests"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

<PropertyGroup>
    <NUnitHome>C:/Program Files (x86)/NUnit 2.6.2</NUnitHome>
    <NUnitConsole>&quot;$(NUnitHome)\bin\nunit-console.exe&quot;</NUnitConsole>
    <testResultsTxt>&quot;$(teamcity_build_checkoutDir)/TestResult.txt&quot;</testResultsTxt>
    <testResultsXml>&quot;$(teamcity_build_checkoutDir)/TestResult.xml&quot;</testResultsXml>
    <projectFile>&quot;$(teamcity_build_checkoutDir)\CrazyEvents.AcceptanceTests\CrazyEvents.AcceptanceTests.csproj&quot;</projectFile>
</PropertyGroup>
 
<Target Name="RunTests">
  <Exec Command="$(NUnitConsole) /labels /out=$(testResultsTxt) /xml=$(testResultsXml) $(projectFile)"/>
</Target>  
    
</Project>

I am not an MSBuild expert and it took me a while to get the paths correctly escaped. If you are getting build errors, check the paths first. TeamCity has some properties that you can reference in your build files to give you access to build-related things. I am using teamcity_build_checkoutDir to specify that the output from the build is placed in the root of the checkout directory.

In TeamCity your build step should look something like this:

SpecflowTeamCityNUnit1

With the build file in place and the step configured you may want to give things a run to make sure everything is working as expected. When you do you will notice that there is something missing from TeamCity.

SpecflowTeamCityNUnit2

If you had used the built in NUnit test runner you would expect to see a Test tab in TeamCity with the details of the tests run as part of the build. The tab is missing because TeamCity doesn’t know about the tests being run by the MSBuild task. The next step will show how to get the test tab and results back.

2. Getting the test tab to show in TeamCity

Luckily for us the good people at TeamCity have created an NUnit addin that does just that. To use the addin it needs to be present in the NUnit addins folder. The best way to ensure the addin is present is to add a Copy task to the MSBuild target. The full build file now looks like this:

<?xml version="1.0" encoding="utf-8" ?>

<Project ToolsVersion="3.5" DefaultTargets="RunTests"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <ItemGroup>
        <NUnitAddinFiles Include="$(teamcity_dotnet_nunitaddin)-2.6.2.*" />
    </ItemGroup>

    <PropertyGroup>
        <NUnitHome>C:/Program Files (x86)/NUnit 2.6.2</NUnitHome>
        <NUnitConsole>&quot;$(NUnitHome)\bin\nunit-console.exe&quot;</NUnitConsole>
        <testResultsTxt>&quot;$(teamcity_build_checkoutDir)/TestResult.txt&quot;</testResultsTxt>
        <testResultsXml>&quot;$(teamcity_build_checkoutDir)/TestResult.xml&quot;</testResultsXml>
        <projectFile>&quot;$(teamcity_build_checkoutDir)\CrazyEvents.AcceptanceTests\CrazyEvents.AcceptanceTests.csproj&quot;</projectFile>
    </PropertyGroup>

    <Target Name="RunTests">
        <MakeDir Directories="$(NUnitHome)/bin/addins" />
        <Copy SourceFiles="@(NUnitAddinFiles)" DestinationFolder="$(NUnitHome)/bin/addins" />
        <Exec Command="$(NUnitConsole) /labels /out=$(testResultsTxt) /xml=$(testResultsXml) $(projectFile)"/>        
    </Target>

</Project>

An addins directory is created and the two addin files are copied from TeamCity to the directory. It is important to specify the correct version of the addin files in the NUnitAddinFiles section for the version of NUnit you are using. With the build file checked in, the next build will have the missing Test tab, and the test summary will be displayed on the run overview.

SpecflowTeamCityNUnit3
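One thing to note from the build file: the version suffix in the NUnitAddinFiles include pattern needs to match the version of NUnit installed on the build agent. If, for example, the agent had NUnit 2.6.3 instead (and assuming your TeamCity version ships an addin for that release), the pattern would presumably become:

<NUnitAddinFiles Include="$(teamcity_dotnet_nunitaddin)-2.6.3.*" />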

3. Running the SpecFlow reports

To run the SpecFlow reports I have used another build target to make two calls to SpecFlow.exe, using the files generated as output by NUnit. The final build file looks like this:

<?xml version="1.0" encoding="utf-8" ?>

<Project ToolsVersion="3.5" DefaultTargets="RunTests;SpecflowReports"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <ItemGroup>
        <NUnitAddinFiles Include="$(teamcity_dotnet_nunitaddin)-2.6.2.*" />
    </ItemGroup>

    <PropertyGroup>
        <NUnitHome>C:/Program Files (x86)/NUnit 2.6.2</NUnitHome>
        <NUnitConsole>&quot;$(NUnitHome)\bin\nunit-console.exe&quot;</NUnitConsole>
        <testResultsTxt>&quot;$(teamcity_build_checkoutDir)/TestResult.txt&quot;</testResultsTxt>
        <testResultsXml>&quot;$(teamcity_build_checkoutDir)/TestResult.xml&quot;</testResultsXml>
        <projectFile>&quot;$(teamcity_build_checkoutDir)\CrazyEvents.AcceptanceTests\CrazyEvents.AcceptanceTests.csproj&quot;</projectFile>
        <SpecflowExe>&quot;C:\Users\Admin\Documents\Visual Studio 2010\Projects\CrazyEventsNew\packages\SpecFlow.1.9.0\tools\specflow.exe&quot;</SpecflowExe>
    </PropertyGroup>

    <Target Name="RunTests">
        <MakeDir Directories="$(NUnitHome)/bin/addins" />
        <Copy SourceFiles="@(NUnitAddinFiles)" DestinationFolder="$(NUnitHome)/bin/addins" />
        <Exec Command="$(NUnitConsole) /labels /out=$(testResultsTxt) /xml=$(testResultsXml) $(projectFile)"/>        
    </Target>

    <Target Name="SpecflowReports">
        <Exec Command="$(SpecflowExe) nunitexecutionreport $(projectFile) /xmlTestResult:$(testResultsXml) /testOutput:$(testResultsTxt) /out:&quot;$(teamcity_build_checkoutDir)/SpecFlowExecutionReport.html&quot;"/>
        <Exec Command="$(SpecflowExe) stepdefinitionreport $(projectFile) /out:&quot;$(teamcity_build_checkoutDir)/SpecFlowStepDefinitionReport.html&quot;"/>
    </Target>

</Project>

Now we just need to get the html output of the reports to show in TeamCity.

4. Displaying the SpecFlow reports as tabs in the build output

Again, the people at TeamCity have made this really easy. First it is necessary to set the required files as build artifacts in the build configuration.

SpecflowTeamCityNUnit4
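For reference, the artifact paths for the build configuration can simply list the generated report files, one pattern per line, relative to the checkout directory. The file names below are taken from the /out paths used in the build file above:

SpecFlowExecutionReport.html
SpecFlowStepDefinitionReport.html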

The final part of the jigsaw is to specify the report html file artifacts to be used as report tabs. Head over to the Report Tabs section of the Administration page and specify the new report tabs.

SpecflowTeamCityNUnit5

Now your tabs will be visible in TeamCity.

SpecflowTeamCityNUnit6

Getting started with automated acceptance tests in an Agile project

Automated testing can bring great benefits to an Agile project but it is not always clear how to best integrate the processes and practices effectively. In this post I will outline a few ideas that might help, along with some pitfalls, based on experience gained from my own projects.

What is automated acceptance testing?

I am using the term automated acceptance tests to mean tests that exercise a system from its most external interface. For the purposes of this post I will be mainly talking about tests that exercise the user interface, but it could also mean tests that exercise an externally visible API such as a web service. The important thing about the tests is that they test the expected behaviour of the system, ensuring that the desired functionality is met. This builds an executable specification for the system.

What does it bring to an Agile project?

In this post I am going to use terms that are from Scrum in particular, but the points covered are applicable to any Agile process (and more than likely non Agile processes as well).

Agile projects are focused on meeting the needs of their customers (Product Owner in Scrum terms) and one of the ways this is done is to use acceptance criteria to define the expected behaviour of a particular user story. It is very valuable to be able to prove that the acceptance criteria have been satisfied, whilst also building a very comprehensive suite of automated regression tests. A suite of automated acceptance tests gives incredible confidence when building the system, especially in Agile developments where there may be refactoring and changes of direction as the changing needs of the Product Owner are met.

Where does automated testing fit into the process?

One of the most important things I have learnt from using automated acceptance testing on an Agile project is that it has to be a first class part of the development process. Depending on the maturity of your team this may mean having an explicit step or task to ensure it gets done, or having it as part of your definition of done. Either way it needs to be given the same level of importance as the other practices. A story cannot be done without automated tests that at least cover the acceptance criteria of the story. It is almost certain that when time is tight and people are under pressure the automated tests will be the first things to be skipped. This is almost always a mistake, and one which I have learnt the hard way. Missing acceptance tests will come back to haunt you. Include them in your process and account for them in your planning activities.

Who writes the automated tests?

The creation of automated acceptance tests requires input from all the members of an Agile team. The roles and make-up of your Agile team may differ, but typically acceptance criteria will be generated with the input of the product owner and analyst (and maybe also the tester in some teams), who will also create specific scenarios that will be automated. As the tests are automated there is obviously some development work in order to perform the automation. There are ways to gain a lot of reuse from a test suite and to reduce the amount of additional development required to extend the suite. I will talk about some of these when looking at some specific tools and technologies in a later section.

What does it mean to ‘traditional’ testing for the project?

In recent years I have become more convinced that it is no longer possible to speak about the traditional siloed roles in software development. The roles of analyst, developer and tester continue to become blurred as roles change and responsibilities overlap. The creation of automated acceptance tests requires a little bit of all these traditional disciplines. Who will be involved in your team depends on the skills and abilities of the team members. To make automated testing work in your team it will be necessary to let go of the idea that developers don’t do testing, or that testers don’t write code.

That said, automated acceptance tests are not meant as a replacement for traditional exploratory testing or to undermine the role of traditional testing in any way. Instead they should be seen as complementary.

This is a great post that discusses the aims of automated and exploratory testing:

Automated vs. exploratory – we got it wrong!

Automated acceptance testing as part of the build process

To get the most out of automated acceptance tests they need to be part of your Continuous Integration build. Acceptance tests are never going to be fast running, and this can pose some challenges to getting the best feedback from your tests. As an indication, my current project has about 300 Selenium based tests and takes about an hour to run. We are 7 sprints in and the number of tests is going to get much higher. At present we have a fast CI build running only the unit tests to give instant feedback, followed by an integration build containing all the acceptance tests that runs after a successful CI build. We are finding that the feedback is a bit slow and are planning to introduce the concept of a ‘current’ integration build that will run a subset of the acceptance tests that have been tagged as being in the area in which fast feedback is required (i.e. the area in which work is being undertaken). The full set of tests will continue to run at their slower cadence.
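As a sketch of how the ‘current’ build could pick out its subset: SpecFlow turns scenario tags into NUnit categories, so scenarios tagged with, say, @current can be selected using nunit-console’s include option. The tag name and assembly path below are purely illustrative:

nunit-console.exe /labels /include:current /out=TestResult.txt /xml=TestResult.xml bin\Debug\CrazyEvents.AcceptanceTests.dll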

Using the right tools

My tools of choice are SpecFlow and Selenium Web Driver, but there are others that perform a similar function. It is important to understand the tools and the best way to use them. When talking about automated acceptance tests, and specifically using Selenium to drive the browser, I often encounter people who have tried to do automation and then have stopped as the tests become brittle and the cost of maintenance becomes too high.  Here are a few tips from my experience of using automated acceptance tests on a large project:

  • Aim to get a lot of reuse from your SpecFlow scenarios. Use the features built into SpecFlow, such as ScenarioContext.Current and its DI capabilities, to make steps that can be reused across many features. If done correctly you will find that it is very easy to add new tests without writing any new steps at all (see the sketch after this list).
  • Use the page object pattern to encapsulate the logic for the web pages. This is one of the single biggest factors in ensuring your test suite is maintainable. Without page objects you will find yourself fixing lots of tests due to a small change to a single page.
  • Think about the wider ecosystem. We are using Selenium Server to give us the ability to run tests across multiple browsers.
  • Learn about your tools to get the best from them. Writing the automation will be a lot more pleasurable if you understand how things are working.
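As a rough illustration of the context sharing mentioned in the first tip (all of the names here are made up for the example), one step can put an object into ScenarioContext.Current and a step in a completely different binding class can pull it back out:

using NUnit.Framework;
using TechTalk.SpecFlow;

// A made-up page object, included only so the sketch is self-contained
public class HypotheticalResultsPage
{
    public void SearchFor(string term) { /* drive the browser here */ }
    public bool IsCurrentPage() { return true; }
}

[Binding]
public class SearchSteps
{
    [When(@"I search for ""(.*)"" events")]
    public void WhenISearchForEvents(string searchTerm)
    {
        var resultsPage = new HypotheticalResultsPage();
        resultsPage.SearchFor(searchTerm);

        // Store the page object so that steps in other binding classes can reuse it
        ScenarioContext.Current.Set(resultsPage);
    }
}

[Binding]
public class ResultsSteps
{
    [Then(@"the results page is displayed")]
    public void ThenTheResultsPageIsDisplayed()
    {
        // Retrieve the page object stored by the earlier step
        var resultsPage = ScenarioContext.Current.Get<HypotheticalResultsPage>();
        Assert.IsTrue(resultsPage.IsCurrentPage());
    }
}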

The posts below discuss how to make a start writing maintainable automated acceptance tests:

Behavioural testing in .Net with SpecFlow and Selenium (Part 1)

Behavioural testing in .Net with SpecFlow and Selenium (Part 2)

Managing SpecFlow acceptance test data with Entity Framework

Summary

  • Automated acceptance tests are about building an executable specification for your system.
  • The executable specification is very useful to prove the acceptance criteria have been met and to give confidence that changes have not broken functionality.
  • Make automated tests part of your process. Do not skimp on them, even if you are under pressure to get things done. Add them to your definition of done.
  • Creating the tests is an activity for all the team, although traditional roles and responsibilities may be blurred.
  • Automated tests are in addition to traditional exploratory style testing and are not meant as a replacement.
  • Get the most from your tests by choosing a sensible CI strategy to give the best feedback.
  • Choose the right tools for the job, and spend time learning to get the best from them.
  • Become an advocate of automated testing in your organisation.

Managing SpecFlow acceptance test data with Entity Framework

A look into how Entity Framework can be used to simplify the management of data for your acceptance test scenarios.

In a previous set of posts I looked at how it is possible to use SpecFlow and Selenium to create automated acceptance tests driven via the browser.

An important part of any acceptance or integration test is the test data. It is important that the data for each test is isolated from any other test. Often these tests require complicated data setup, which can add additional complexity and maintenance to the test suite. This post assumes that the system under test is using a SQL database.

Traditional ways to manage test data

There are a few ways to manage the test data for acceptance tests. Probably the most widely used are:

  • Run some SQL scripts as part of the test setup and tear-down – Can be effective but means extra maintenance of the scripts is required. These scripts can be prone to errors as they are built outside of the type safety that the rest of the system benefits from. This approach also requires a mechanism to run the scripts when the tests run.
  • Use a standard set of data for all tests – the main issue with this approach is that it may not allow for proper isolation of the test data. It is possible to use transactions or some similar mechanism to allow for this, but sometimes it is only the order of the running tests that ensures the correct data is available for the tests.

There are of course other ways, and many variations on the theme.

A better approach..?

It would be better if it were possible to add the data as part of the test setup in a type-safe way, using the language of the system without the need for external scripts.

If you are using an ORM in your application, it is very likely that you already have the tools needed to do just this. In this post I am going to use Entity Framework 5, but you could just as easily use nHibernate or something else to get the same result.
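To give a flavour of what that looks like, here is a minimal sketch of the kind of context used in the example below. It is only an assumption about the shape of the real CrazyEventsContext, which belongs to the application itself:

using System.Data.Entity;

public class CrazyEventsContext : DbContext
{
    // Entity sets used by the test setup code later in this post
    public DbSet<Region> Regions { get; set; }
    public DbSet<Event> Events { get; set; }
}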

An example

Here is the SpecFlow scenario we are going to automate.

@eventSearch
Scenario: Search for events in a region and display the results
	Given that the following regions exist
	| Id | Name   |
	| 1  | North  |
	| 2  | South  |
	| 3  | London |
	And the following events exist
	| Code    | Name                              | Description                                   | Region Id |
	| CR/1001 | Crochet for Beginners             | A gentle introduction into the art of crochet | 1         |
	| CH/3001 | Cat Herding                       | A starter session for the uninitiated         | 3         |
	| CH/3002 | Cat Herding - Advanced techniques | Taking it to the next level                   | 3         |
	When I navigate to the event search page
	And I select the "London" region 
	And perform a search
	Then The search results page is displayed
	And the following results are displayed
	| Code    | Name                              | Description                           | Region |
	| CH/3001 | Cat Herding                       | A starter session for the uninitiated | London |
	| CH/3002 | Cat Herding - Advanced techniques | Taking it to the next level           | London |

It is only the first two steps of the scenario that we are interested in for this post. Even before we start satisfying the first step we should make sure that the database is in the correct state to start adding the data for the test.

To do this we can use a BeforeScenario hook. As the name suggests, this method is called before each scenario is run. You will notice that our scenario has a tag of @eventSearch. The same tag is used by the BeforeScenario attribute to ensure that the BeforeScenario method is only triggered by scenarios that have the @eventSearch tag. This becomes important when you have multiple feature files in the same project.

[BeforeScenario("@eventSearch")]
public void Setup()
{
    using (var context = new CrazyEventsContext())
    {
        context.Database.ExecuteSqlCommand("DELETE FROM Events DBCC CHECKIDENT ('CrazyEvents.dbo.Events',RESEED, 0)");
        context.Database.ExecuteSqlCommand("DELETE FROM Regions DBCC CHECKIDENT ('CrazyEvents.dbo.Events',RESEED, 0)");
        context.SaveChanges();
    }
}

We are using the ExecuteSqlCommand method to clear the contents of the tables and reseed the identity columns of the tables. If the tables are not used as part of a foreign key relationship it is possible to use TRUNCATE TABLE.

The next step is to use Entity Framework to populate the region and event details from the SpecFlow table. Although I have these steps as part of a scenario, you could also use a Background step to have the data populated at the start of each scenario.
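For illustration, a Background version might look something like this (a sketch, with the table data trimmed down from the scenario above):

Background:
	Given that the following regions exist
	| Id | Name  |
	| 1  | North |
	And the following events exist
	| Code    | Name                  | Description                                   | Region Id |
	| CR/1001 | Crochet for Beginners | A gentle introduction into the art of crochet | 1         |

Either way, the table data ends up flowing through the same step definitions: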

[Given(@"that the following regions exist")]
public void GivenThatTheFollowingRegionsExist(Table table)
{
    var regions = table.CreateSet<Region>();
 
    using (var context = new CrazyEventsContext())
    {
        foreach (var region in regions)
        {
            context.Regions.Add(region);
        }
                
        context.SaveChanges();
    }
}
 
[Given(@"the following events exist")]
public void GivenTheFollowingEventsExist(Table table)
{
    var events = table.CreateSet<Event>();
 
    using (var context = new CrazyEventsContext())
    {
        foreach (var crazyEvent in events)
        {
            context.Events.Add(crazyEvent);
        }
 
        context.SaveChanges();
    }
}

SpecFlow has some table helper extension methods. I am using table.CreateSet<T>() to create an IEnumerable<T> from the data in the table. For this to work your table column names need to match the properties of your entity (you can use spaces for readability if you like).
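To make the mapping concrete, here is a sketch of what the Region and Event entities might look like; the real classes may well have extra members such as navigation properties. Note how the "Region Id" column (with its readability space) maps to the RegionId property:

public class Region
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Event
{
    public int Id { get; set; }
    public string Code { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public int RegionId { get; set; }   // matched by the "Region Id" column
}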

After I have the list of entities, I simply use the Entity Framework context to add each entity to the context in turn. Remember to call SaveChanges() to perform the update.

And that is all there is to it. The nice thing about this method of managing test data is that the data required for the test is specified as part of the test itself. Everyone can see what data is expected to be present when the test is run.

The easy way to run Selenium and SpecFlow tests in a TeamCity build with IIS Express

In Behavioural testing in .Net Part 1 and Part 2 I looked at creating behavioural acceptance tests with SpecFlow and Selenium. At some point after you have created your tests you will probably want to run them as part of a Continuous Integration build, or a nightly build or similar on a CI server such as TeamCity.

As the tests rely on browser interaction they obviously need the website to be running on the build server in order for the tests to function. There are several ways to get a web application up and running on the build server as part of the build; Web Deploy is probably one of the best.

However there are times you might want to get things up and running quickly as part of a spike or to just try something out. I am not suggesting that this is necessarily enterprise strength, but it may well be enough depending on your requirements.

This is not a TeamCity tutorial, so I am going to focus on the build steps required.

Step 1 – Build the project

This is fairly obvious. You cannot do much without a successful build.

Step 2 – Start the web application with IIS Express

IIS Express is a lightweight web server that is generally used for development purposes. It can be used in various ways and with different types of web application. For more information on how it can be used see this page. The interesting bit is that IIS Express can launch a web application directly from a folder at the command prompt, and can be used for various types of application including ASP.NET, WCF and even PHP.

A Command Line build runner is in order.

The text for the custom script is:

start C:\"Program Files (x86)"\"IIS Express"\iisexpress.exe /path:"%teamcity.build.checkoutDir%\CrazyEvents" /port:2418

This tells TeamCity to start a new command window and run the IIS Express executable. The /path: parameter tells IIS Express which folder contains the web application. I have used a TeamCity parameter to specify the checkout directory. You need to make sure this points to the directory of your web application. If it is not correct the build log will let you know exactly where it is pointing and you can adjust it as required.

The /port: parameter specifies the port that the website will run on. This needs to be the same as the port specified in your tests.

Just a quick note about the script. TeamCity has some peculiar behaviour regarding double quotes in the command strings. It took a little while for me to get it right. If you have problems but the script looks correct, just double-check the quote placement.

As another side note, you will not be able to see the command window when the step runs, but it will be running and you should be able to navigate to the website on the build server.

Step 3 – Running the tests

Run the tests in the usual way with the correct build runner.

Step 4 – Stop IIS Express

The final step is to stop IIS Express. We will use another command line runner.

The custom script is:

taskkill /IM IISExpress.exe /F

which forces the IISExpress instance to close. Once again, you cannot see this in action but it is working in the background.

And that is all there is to it, quick and easy. Behavioural testing doesn’t have to mean a painful setup and teardown experience.

Behavioural testing in .Net with SpecFlow and Selenium (Part 2)

In this series of posts I am going to run through using Selenium and SpecFlow to test .Net applications via the user interface. I will talk about the situations in which this type of testing can be used, and how we can create effective, non-brittle tests with the help of the Page Object pattern.

In the first post in this series I looked at creating a scenario with SpecFlow and generating some empty step definitions. In this post I will implement the steps using Selenium in order to make the scenario test pass.

Selenium

Selenium is a web automation framework. In simple terms it allows you to drive a browser via code. Selenium can be used in many browser automation situations, but one of its main uses is for automated testing of web applications.

When it comes to testing the behaviours of web pages there has traditionally been a problem where the testing code becomes tied to the implementation details of the page. This can make the tests brittle as a change to the page implementation details, such as a change to a field name or Id, will break the test even if the behaviour of the page has not changed.

In order to protect against this it is desirable to encapsulate the implementation details of a page under test in a single place, so that if there is an implementation change there is only a single place to change, instead of having to fix all the separate bits which rely on those details.

Separating the behaviour of a page from its implementation details is the purpose of the Page Object pattern. A page object is simply a class that encapsulates the implementation details for a web page, allowing the automation to focus only on behaviour.

Implementing the first scenario

The first test is all about the Search Page so it is the Search Page page object that is first to be created. I find it easiest to work in a TDD style workflow, using the SpecFlow steps to guide me towards the implementation one step at a time. Let’s have a look at the implementation:

public class CrazyEventsSearchPage
{
    [FindsBy(How = How.Id, Using = "region")]
    private IWebElement regionDropdown;

    [FindsBy(How = How.Name, Using = "Submit")]
    private IWebElement submitButton;

    private static IWebDriver driver;

    public static CrazyEventsSearchPage NavigateTo(IWebDriver webDriver)
    {
        driver = webDriver;
        driver.Navigate().GoToUrl("http://localhost:2418/event/Search");
        var searchPage = new CrazyEventsSearchPage();
        PageFactory.InitElements(driver, searchPage);
        return searchPage;
    }

    public void SelectRegionDropdown()
    {
        regionDropdown.Click();
    }

    public IList<string> GetAllPossibleRegions()
    {
        var selectList = new SelectElement(regionDropdown);
        var options = selectList.Options;

        return options.Select(webElement => webElement.Text).ToList();
    }
}

Remember that the job of a page object is to encapsulate the implementation details of a web page. Selenium provides some helpers for working with page objects. To use them, make sure you have installed the Selenium WebDriver Support Classes into your test project via NuGet. PageFactory.InitElements(driver, searchPage) takes a page object, in this case searchPage, and ensures that all the elements are populated with the details from the web page. In this case it will populate the regionDropdown and submitButton web elements ready to be used.

It is the methods and fields of the page object that encapsulate the implementation details. If the id of the region dropdown needs to be changed, only the FindsBy attribute would need to change. If submitting the form needed to do something else, only a single change would be required.

The NavigateTo method is used as the entry point into the web site. The other methods are used by the SpecFlow steps.

[Binding]
public class EventSearchSteps
{
    private CrazyEventsSearchPage searchPage;
    private CrazyEventsSearchResultsPage searchResultsPage;
    private IWebDriver driver;

    [BeforeScenario()]
    public void Setup()
    {
        driver = new FirefoxDriver();
    }

    [AfterScenario()]
    public void TearDown()
    {
        driver.Quit();
    }

    [Given(@"that I want to search for an event by region")]
    public void GivenThatIWantToSearchForAnEventByRegion()
    {
        searchPage = CrazyEventsSearchPage.NavigateTo(driver);
    }

    [When(@"a list of possible regions is presented")]
    public void WhenAListOfPossibleRegionsIsPresented()
    {
        searchPage.SelectRegionDropdown();
    }

    [Then(@"the list should contain ""(.*)"", ""(.*)"", ""(.*)"" and ""(.*)""")]
    public void ThenTheListShouldContainAnd(string p0, string p1, string p2, string p3)
    {
        var regions = searchPage.GetAllPossibleRegions();
        Assert.IsTrue(regions.Contains(p0));
        Assert.IsTrue(regions.Contains(p1));
        Assert.IsTrue(regions.Contains(p2));
        Assert.IsTrue(regions.Contains(p3));
    }

    [Then(@"""(.*)"" should be the default value")]
    public void ThenShouldBeTheDefaultValue(string p0)
    {
        var selectedRegion = searchPage.GetSelectedRegion();
        Assert.IsTrue(selectedRegion.Contains(p0));
    }
}

I have added some setup and teardown methods to ensure that the web driver is initialised and shut down correctly. With all the implementation in place the scenario now passes.

Another scenario

It’s time to add another scenario:

Scenario: Search for events in a region and display the results
	Given that I want to search for an event by region
	When I select the "London" region
	And perform a search
	Then The search results page is displayed
	And the following results are displayed
	| Event Code | Event Name                        | Region | Description                           |
	| CH/3001    | Cat Herding                       | London | A starter session for the uninitiated |
	| CH/3002    | Cat Herding - Advanced techniques | London | Taking it to the next level           |

This scenario uses a SpecFlow table to define the expected results. Gherkin has a lot of powerful features to make the most of defining scenarios. The step definitions were generated in the same way as the first scenario. I created another page object for the search results:

public class CrazyEventsSearchResultsPage
{
    private static IWebDriver driver;

    [FindsBy(How = How.Id, Using = "searchResults")] private IWebElement searchResults;

    public CrazyEventsSearchResultsPage(IWebDriver webDriver)
    {
        driver = webDriver;
    }

    public bool IsCurrentPage()
    {
        return driver.Title == "Crazy Events - Search Results";
    }

    public List<List<string>> GetResults()
    {
        var results = new List<List<string>>();
        var allRows = searchResults.FindElements(By.TagName("tr"));
        foreach (var row in allRows)
        {
            var cells = row.FindElements(By.TagName("td"));
            var result = cells.Select(cell => cell.Text).ToList();
            results.Add(result);
        }

        return results;
    }
}

This page object doesn’t have a direct entry point, as this page cannot be reached directly without a search being performed. Instead it is reached from the search page object. I have added some extra functionality for performing the search and initialising the search results page object. We will see these in a moment.

And the step definitions for the new steps. Remember that any identical steps share an implementation.

[When(@"I select the ""(.*)"" region")]
public void WhenISelectTheRegion(string region)
{
    searchPage.SelectRegion(region);
}

[When(@"perform a search")]
public void WhenPerformASearch()
{
    searchResultsPage = searchPage.SubmitSearch();
}

[Then(@"The search results page is displayed")]
public void ThenTheSearchResultsPageIsDisplayed()
{
    Assert.IsTrue(searchResultsPage.IsCurrentPage());
}

[Then(@"the following results are displayed")]
public void ThenTheFollowingResultsAreDisplayed(Table table)
{
    var results = searchResultsPage.GetResults();
    var i = 1; //ignore the header

    foreach (var row in table.Rows)
    {
        Assert.IsTrue(results[i].Contains(row["Event Code"]));
        Assert.IsTrue(results[i].Contains(row["Event Name"]));
        Assert.IsTrue(results[i].Contains(row["Region"]));
        Assert.IsTrue(results[i].Contains(row["Description"]));
        i++;
    }
}

The additional methods for the search page object are:

public string GetSelectedRegion()
{
    var regionElement = new SelectElement(regionDropdown);
    return regionElement.SelectedOption.Text;
}

public void SelectRegion(string region)
{
    var regionElement = new SelectElement(regionDropdown);
    regionElement.SelectByText(region);
}

public CrazyEventsSearchResultsPage SubmitSearch()
{
    submitButton.Click();

    var searchResultsPage = new CrazyEventsSearchResultsPage(driver);
    PageFactory.InitElements(driver, searchResultsPage);

    return searchResultsPage;
}

The SubmitSearch() method initialises the search results page object ready to be used in the scenario steps.

In conclusion…

Page objects are a powerful way to overcome the traditional problems with automated browser tests. When Selenium is coupled with SpecFlow they allow for a TDD style workflow that can give true outside-in development.

Another useful side effect of using SpecFlow scenarios is that the steps create an API for the application. Once the steps have been created it is possible to create new scenarios without any coding at all, which means that it is accessible to members of the team who might have otherwise found implementing tests a bit too developer centric.
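For example, once the steps above exist, a scenario like this one (made up for illustration) could be added to the feature file without writing a single line of new C#; every line binds to a step that already exists:

Scenario: Searching the North region displays the results page
	Given that I want to search for an event by region
	When I select the "North" region
	And perform a search
	Then The search results page is displayed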

Even the implementation of the page objects is not generally too complicated once the pattern has been established. I appreciate that there is a bit more of a learning curve, but I think this type of testing offers far more than the record and playback type of behavioural tests and would be a useful skill for all team members to have.

Finally, as with all code, do not hesitate to refactor the scenarios, steps and page objects. There are lots of things that could be refactored in this simple example. A couple of the obvious ones are the hard coded URL and the logic used to get the data from the search results page. I will leave these as an exercise for the reader.
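As a hint for the first of those, the base URL could be read from the test project’s configuration rather than being hard coded in the page object. A minimal sketch, assuming an appSettings entry named "CrazyEventsBaseUrl" is added to app.config:

using System.Configuration;

public static class TestSettings
{
    // Reads the site root from app.config so the page objects no longer
    // hard code "http://localhost:2418"
    public static string BaseUrl
    {
        get { return ConfigurationManager.AppSettings["CrazyEventsBaseUrl"]; }
    }
}

The navigation line in the page object would then become something like driver.Navigate().GoToUrl(TestSettings.BaseUrl + "/event/Search").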

Behavioural testing in .Net with SpecFlow and Selenium (Part 1)

In this series of posts I am going to run through using Selenium and SpecFlow to test .Net applications via the user interface. I will talk about the situations in which this type of testing can be used, and how we can create effective, non-brittle tests with the help of the Page Object pattern.

I have consciously avoided using the term Behaviour Driven Development (BDD) in the title as this post isn’t really about BDD as such. It is about using some tools in a way that could be used in many situations, with BDD being just one of them. I want to discuss how to use the BDD framework SpecFlow and the web automation framework Selenium to create tests that test an application via its web user interface.

When to use this approach

You could use this approach in several situations. In each case the idea is to create an executable specification and a suite of automated tests for your code base. The two most common situations are:

  1. Automated acceptance tests – let’s call this approach ‘test afterwards’. There is often a desire to create acceptance tests (or integration tests or smoke tests or whatever…) for an application. Using SpecFlow and Selenium you can create tests that exercise the application as a user would actually interact with it. A suite of acceptance tests is a worthwhile addition to any application.
  2. Behaviour Driven Development style – This is the ‘test before’ approach. In Test Driven Development with MVC I talked about how a true BDD style would ideally test user interaction with the application. These tools allow you to do just that. The workflow would be very similar to the outside-in approach I describe in that post, only you would create scenarios in SpecFlow instead of writing unit tests. Of course there are tons of approaches when it comes to BDD, this being just one of them. Liz Keogh has a useful summary and introduction to BDD that is worth a look if you want to know more about it.

In this post I am going to use a test afterwards approach so that I can focus on the tooling.

Installing SpecFlow and Selenium

I have created a new folder to house my acceptance tests. Installing Selenium is easy: simply grab the latest version from NuGet (you are using NuGet, right?). Make sure to select Selenium WebDriver. For SpecFlow, grab the latest version from NuGet and then get the Visual Studio integration from the Visual Studio Gallery. This will add some extra VS tooling we will use later.

The first scenario

I am going to revisit the event search scenarios from Test Driven Development with MVC and expand on them a little. The first thing is to add a SpecFlow feature file to the project. Right click on the project, select New Item… and add a new SpecFlow feature file. I have called mine EventSearch.feature. The feature file has a feature, which is essentially a user story, and a number of scenarios in the Given When Then format.

The feature files are written in the Gherkin language, which is intended to be a human readable DSL for describing business behaviour.

Let’s start populating the feature file with the first scenario.

Feature: EventSearch
	In order to find an event to participate in
	As a potential event delegate
	I want to be able to search for an event

Scenario: Event Search Criteria should be displayed
	Given that I want to search for an event by region
	When a list of possible regions is presented
	Then the list should contain "All", "North", "South" and "London"
	And "All" should be the default value
	And "All" should be the default value

The next step is to create the bindings that allow us to wire up the Gherkin feature to our application’s code. Before we do that we just need to tell SpecFlow to use Visual Studio’s testing framework MSTest instead of the default NUnit. This is done by adding a line of configuration to your app.config; you will find the specFlow section already exists:

<specFlow>
<!-- For additional details on SpecFlow configuration options see http://go.specflow.org/doc-config -->

    <unitTestProvider name="MsTest.2010"/>
</specFlow>

As a brief aside, you will notice that the feature file has a code behind .cs file that has been created by SpecFlow. If you have a look inside you will see that the code is a rather awkward looking MSTest test class. It is used to run the scenarios that are described in the feature file and implemented in the binding step definitions we are about to create. It is not pretty, but it is not intended to be read or edited by a user as it is generated by SpecFlow when the feature file is saved. SpecFlow will generate the correct type of testing hooks for the test framework you are using.

Create the step definitions

The step definitions are where we add code to perform the tests. The step definitions live in a class with a [Binding] attribute. When we run the feature, the test runner runs the tests defined in the feature code-behind, which are bound to these steps by SpecFlow. The easiest way to create these bindings is to use SpecFlow’s Visual Studio tooling. Right click on the scenario in the feature file and select “Generate step definitions”. This brings up a dialogue box that allows you to create the necessary bindings to wire up the test. I created the bindings in a new class called EventSearchSteps. The generated bindings look like this:

using System;
using TechTalk.SpecFlow;

namespace CrazyEvents.AcceptanceTests
{
    [Binding]
    public class EventSearchSteps
    {
        [Given(@"that I want to search for an event by region")]
        public void GivenThatIWantToSearchForAnEventByRegion()
        {
            ScenarioContext.Current.Pending();
        }

        [When(@"a list of possible regions is presented")]
        public void WhenAListOfPossibleRegionsIsPresented()
        {
            ScenarioContext.Current.Pending();
        }

        [Then(@"the list should contain ""(.*)"", ""(.*)"", ""(.*)"" and ""(.*)""")]
        public void ThenTheListShouldContainAnd(string p0, string p1, string p2, string p3)
        {
            ScenarioContext.Current.Pending();
        }

        [Then(@"""(.*)"" should be the default value")]
        public void ThenShouldBeTheDefaultValue(string p0)
        {
            ScenarioContext.Current.Pending();
        }
    }
}

The step definitions have been created with a pending implementation ready for us to complete. You will see that some parts of the features have generated steps that contain attributes with what look like regular expressions, and that the methods have parameters. This is a feature of SpecFlow that allows for reuse of step definitions when the only thing that changes is a string value. We will see how this is used later. SpecFlow has lots of useful features like this; I would advise checking out the documentation to learn more about it.
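To illustrate the reuse, each "(.*)" captures a quoted value from the scenario text and passes it into the method as a parameter, so a single binding can service many scenario lines. Taking the last binding above as an example, both of the following (hypothetical) scenario lines would call the same method:

// Then "All" should be the default value    -> p0 is "All"
// Then "North" should be the default value  -> p0 is "North"
[Then(@"""(.*)"" should be the default value")]
public void ThenShouldBeTheDefaultValue(string p0)
{
    ScenarioContext.Current.Pending();
}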

Let’s run the test and see what happens.

ReSharper’s test runner shows that the test was inconclusive. This is not really a surprise as the test implementation is pending. Of course you can use Visual Studio’s built-in test runner instead.

In the next post we will have a look at how we can implement the bindings with Selenium to automate browser interactions.
