Cognitive Biases

During another brown bag lunch session I gave a short presentation about cognitive biases.

Cognitive biases are systematic limitations in thinking, judgement and decision making that affect human thought processes. Knowing about them is the first line of defence against something that affects each of us.

The presentation can be found here.

Remember to have a look at the slide notes, as they contain some more information not included on the slides themselves.

The foundations of agile – Lean Software Development, Theory of Constraints and Systems Thinking

During a recent brown bag lunch I gave a presentation on the foundations of agile. I talked about some of the underlying philosophies and methodologies that have influenced agile software development.

The presentation covered a little bit of history before looking at Lean Software Development (and its development from the Toyota Production System), the Theory of Constraints and Systems Thinking.

The presentation can be found here.

Remember to have a look at the slide notes, as they contain some more information not included on the slides themselves.

Managing parent and child collection relationships in Entity Framework – What is an identifying relationship anyway?

Some of the posts on this site are more of a reminder to myself about some problem I have encountered and the solution I found for it. This is one such post. The issue addressed below is fairly common and has been described and solved more comprehensively elsewhere, but I have added this so I can find it again in the future should the need arise. If it proves helpful to anyone else then all the better.

It is often desirable to have a parent entity that has a collection of child entities. For example, an Order entity may have a collection of OrderLine entities, or a User entity may have a collection of Role entities. These examples may look similar, but they have a subtle difference that can cause some behavioural issues in certain circumstances.

In the first example the order lines belong to an order and it doesn’t make sense for them to exist on their own. In the second example a Role can exist independently of the User.

This is the difference between an identifying and a non-identifying relationship. Another example, from Stack Overflow:

A book belongs to an owner, and an owner can own multiple books. But the book can also exist without the owner, and it can change owners. The relationship between a book and an owner is a non-identifying relationship.

A book, however, is written by an author, and the author could have written multiple books. But the book needs to be written by an author; it cannot exist without an author. Therefore the relationship between the book and the author is an identifying relationship.

Whatever the type of relationship, it would be nice to be able to simply add and remove child items in the parent collection and let EF worry about all the bothersome adding to and removing from the database.

Adding to the collection works correctly in both cases, and if you have a non-identifying relationship (your child’s foreign key is nullable) you will also be fine when removing items from the collection.

If you have only implied an identifying relationship (your child’s primary key is on the table’s Id field and your child’s foreign key is non-nullable) you are likely to be in for a world of pain trying to remove items from the child collection.

When you remove an item from a child collection with the above kind of relationship, Entity Framework tries to update the child entity’s foreign key to null. This causes problems as the foreign key is non-nullable, and you get this System.InvalidOperationException when the context saves:

The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.
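
For illustration, here is a minimal sketch of the kind of removal that triggers the exception (the context name and the Include call are assumed for the example):

using (var context = new OrderContext())
{
    var order = context.Orders.Include("OrderLines").First();

    // Removing from the collection only severs the relationship; with an
    // implied identifying relationship EF tries to null the foreign key
    // rather than delete the row.
    order.OrderLines.Remove(order.OrderLines.First());

    context.SaveChanges(); // throws the InvalidOperationException above
}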

The way to stop this from happening is to make the identifying relationship explicit by specifying the foreign key field of the child to be part of its primary key. I know that in the ORM world composite keys are not really in vogue, but this makes it explicit to Entity Framework that the child cannot exist without its parent, and so it should be deleted when it is removed from the parent’s collection.

How do I specify a composite key?

To set the composite key, simply annotate the entity (or use the fluent API as part of OnModelCreating in the context). If you have an OrderLine entity that has an identifying relationship with an Order entity, the OrderLine entity should be annotated like this:

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class OrderLine
{
    [Key, Column(Order = 0), DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    public string Description { get; set; }

    public decimal Cost { get; set; }

    // The foreign key forms part of the composite primary key, making the
    // identifying relationship explicit to Entity Framework.
    [Key, ForeignKey("Order"), Column(Order = 1)]
    public int OrderId { get; set; }

    public virtual Order Order { get; set; }
}
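
Alternatively, the same composite key can be configured without attributes using the fluent API; here is a sketch of the equivalent configuration in the context’s OnModelCreating override:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Composite primary key: the child's own Id plus the foreign key to the parent.
    modelBuilder.Entity<OrderLine>()
        .HasKey(ol => new { ol.Id, ol.OrderId });

    // Keep Id as a database-generated identity even though it is part of a composite key.
    modelBuilder.Entity<OrderLine>()
        .Property(ol => ol.Id)
        .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
}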

Generating SpecFlow reports for your TeamCity builds

You may or may not be aware that SpecFlow can produce reports giving information about the features and steps after a test run has finished. The test execution report contains details about the SpecFlow features, scenarios and steps. The step definition report shows the steps and their statuses, including unused steps and steps with no implementation.

The SpecFlow documentation details how the reports can be generated using SpecFlow.exe at the command line. Whilst it is useful to be able to generate these reports manually, it would be better if we could generate them as part of the TeamCity build process and make them available from within the build details in TeamCity. This makes the reports more accessible to all team members, and is especially useful when showing your product owner the scenarios covered by your acceptance test suite.
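
For reference, the manual generation looks something like this at the command line (the project name follows the BookShop example used below; adjust paths to suit):

specflow.exe nunitexecutionreport BookShop.AcceptanceTests.csproj /xmlTestResult:TestResult.xml /testOutput:TestResult.txt /out:TestExecutionReport.html
specflow.exe stepdefinitionreport BookShop.AcceptanceTests.csproj /out:StepDefinitionReport.html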

Let’s run through getting things up and running.

1. Running the tests using nunit-console.exe

In order to generate the SpecFlow reports you must first run the tests using nunit-console.exe, which produces the output files that are fed into the report generation. The SpecFlow documentation tells us this is the command we need to run:

nunit-console.exe /labels /out=TestResult.txt /xml=TestResult.xml bin\Debug\BookShop.AcceptanceTests.dll

As we need to run nunit-console.exe directly, it is not possible to use TeamCity’s built-in NUnit runner. There are a few runner types that could run this command, such as Command Line or PowerShell, but the one that best fits our needs is probably MSBuild (as always, the same result can be achieved in lots of different ways).

In Visual Studio create a build file. I have given mine the imaginative name of nunit.build.

<?xml version="1.0" encoding="utf-8" ?>

<Project ToolsVersion="3.5" DefaultTargets="RunTests"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

<PropertyGroup>
    <NUnitHome>C:/Program Files (x86)/NUnit 2.6.2</NUnitHome>
    <NUnitConsole>&quot;$(NUnitHome)\bin\nunit-console.exe&quot;</NUnitConsole>
    <testResultsTxt>&quot;$(teamcity_build_checkoutDir)/TestResult.txt&quot;</testResultsTxt>
    <testResultsXml>&quot;$(teamcity_build_checkoutDir)/TestResult.xml&quot;</testResultsXml>
    <projectFile>&quot;$(teamcity_build_checkoutDir)\CrazyEvents.AcceptanceTests\CrazyEvents.AcceptanceTests.csproj&quot;</projectFile>
</PropertyGroup>
 
<Target Name="RunTests">
  <Exec Command="$(NUnitConsole) /labels /out=$(testResultsTxt) /xml=$(testResultsXml) $(projectFile)"/>
</Target>  
    
</Project>

I am not an MSBuild expert and it took me a while to get the paths correctly escaped. If you are getting build errors, check the paths first. TeamCity has some properties that you can reference in your build files to give you access to build-related things. I am using teamcity_build_checkoutDir to specify that the output from the build is placed in the root of the checkout directory.

In TeamCity your build step should look something like this:

[Screenshot: the MSBuild build step configuration in TeamCity]

With the build file in place and the step configured, you may want to give things a run to make sure everything is working as expected. When you do, you will notice that something is missing from TeamCity.

[Screenshot: the build results without a Test tab]

If you had used the built-in NUnit test runner you would expect to see a Test tab in TeamCity with the details of the tests run as part of the build. The tab is missing because TeamCity doesn’t know about the tests being run by the MSBuild task. The next step will show how to get the tab and its results back.

2. Getting the test tab to show in TeamCity

Luckily for us, the good people at TeamCity have created an NUnit addin that does just that. To use the addin it needs to be present in the NUnit addins folder. The best way to ensure the addin is present is to add a Copy task to the MSBuild target. The full build file now looks like this:

<?xml version="1.0" encoding="utf-8" ?>

<Project ToolsVersion="3.5" DefaultTargets="RunTests"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <ItemGroup>
        <NUnitAddinFiles Include="$(teamcity_dotnet_nunitaddin)-2.6.2.*" />
    </ItemGroup>

    <PropertyGroup>
        <NUnitHome>C:/Program Files (x86)/NUnit 2.6.2</NUnitHome>
        <NUnitConsole>&quot;$(NUnitHome)\bin\nunit-console.exe&quot;</NUnitConsole>
        <testResultsTxt>&quot;$(teamcity_build_checkoutDir)/TestResult.txt&quot;</testResultsTxt>
        <testResultsXml>&quot;$(teamcity_build_checkoutDir)/TestResult.xml&quot;</testResultsXml>
        <projectFile>&quot;$(teamcity_build_checkoutDir)\CrazyEvents.AcceptanceTests\CrazyEvents.AcceptanceTests.csproj&quot;</projectFile>
    </PropertyGroup>

    <Target Name="RunTests">
        <MakeDir Directories="$(NUnitHome)/bin/addins" />
        <Copy SourceFiles="@(NUnitAddinFiles)" DestinationFolder="$(NUnitHome)/bin/addins" />
        <Exec Command="$(NUnitConsole) /labels /out=$(testResultsTxt) /xml=$(testResultsXml) $(projectFile)"/>        
    </Target>

</Project>

An addins directory is created and the two addin files are copied from TeamCity to the directory. It is important to specify the correct version of the addin files in the NUnitAddinFiles item for the version of NUnit you are using. With the build file checked in, the next build will have the missing Test tab, and the test summary will be displayed on the run overview.

[Screenshot: the build results with the Test tab and test summary]

3. Running the SpecFlow reports

To run the SpecFlow reports I have used another build target that makes two calls to SpecFlow.exe with the files generated as output by NUnit. The final build file looks like this:

<?xml version="1.0" encoding="utf-8" ?>

<Project ToolsVersion="3.5" DefaultTargets="RunTests;SpecflowReports"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <ItemGroup>
        <NUnitAddinFiles Include="$(teamcity_dotnet_nunitaddin)-2.6.2.*" />
    </ItemGroup>

    <PropertyGroup>
        <NUnitHome>C:/Program Files (x86)/NUnit 2.6.2</NUnitHome>
        <NUnitConsole>&quot;$(NUnitHome)\bin\nunit-console.exe&quot;</NUnitConsole>
        <testResultsTxt>&quot;$(teamcity_build_checkoutDir)/TestResult.txt&quot;</testResultsTxt>
        <testResultsXml>&quot;$(teamcity_build_checkoutDir)/TestResult.xml&quot;</testResultsXml>
        <projectFile>&quot;$(teamcity_build_checkoutDir)\CrazyEvents.AcceptanceTests\CrazyEvents.AcceptanceTests.csproj&quot;</projectFile>
        <SpecflowExe>&quot;C:\Users\Admin\Documents\Visual Studio 2010\Projects\CrazyEventsNew\packages\SpecFlow.1.9.0\tools\specflow.exe&quot;</SpecflowExe>
    </PropertyGroup>

    <Target Name="RunTests">
        <MakeDir Directories="$(NUnitHome)/bin/addins" />
        <Copy SourceFiles="@(NUnitAddinFiles)" DestinationFolder="$(NUnitHome)/bin/addins" />
        <Exec Command="$(NUnitConsole) /labels /out=$(testResultsTxt) /xml=$(testResultsXml) $(projectFile)"/>        
    </Target>

    <Target Name="SpecflowReports">
        <Exec Command="$(SpecflowExe) nunitexecutionreport $(projectFile) /xmlTestResult:$(testResultsXml) /testOutput:$(testResultsTxt) /out:&quot;$(teamcity_build_checkoutDir)/SpecFlowExecutionReport.html&quot;"/>
        <Exec Command="$(SpecflowExe) stepdefinitionreport $(projectFile) /out:&quot;$(teamcity_build_checkoutDir)/SpecFlowStepDefinitionReport.html&quot;"/>
    </Target>

</Project>

Now we just need to get the HTML output of the reports to show in TeamCity. (Remember that the SpecflowReports target needs to be invoked too, either via the DefaultTargets attribute above or in the Targets field of the TeamCity build step.)

4. Displaying the SpecFlow reports as tabs in the build output

Again, the people at TeamCity have made this really easy. First, it is necessary to set the required files as build artifacts in the build configuration.

[Screenshot: the artifact paths in the build configuration settings]
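
Assuming the reports are written to the root of the checkout directory as in the build file above, the artifact paths are simply the two file names:

SpecFlowExecutionReport.html
SpecFlowStepDefinitionReport.html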

The final piece of the jigsaw is to specify the report HTML file artifacts to be used as report tabs. Head over to the Report Tabs section of the Administration page and specify the new report tabs.

[Screenshot: the Report Tabs configuration in the Administration page]

Now your tabs will be visible in TeamCity.

[Screenshot: the SpecFlow report tabs in the build results]

Getting started with automated acceptance tests in an Agile project

Automated testing can bring great benefits to an Agile project, but it is not always clear how best to integrate the processes and practices effectively. In this post I will outline a few ideas that might help, along with some pitfalls, based on experience gained from my own projects.

What is automated acceptance testing?

I am using the term automated acceptance tests to mean tests that exercise a system from its most external interface. For the purposes of this post I will mainly be talking about tests that exercise the user interface, but it could also mean tests that exercise an externally visible API such as a web service. The important thing about the tests is that they test the expected behaviour of the system, ensuring that the desired functionality is met. This builds an executable specification for the system.

What does it bring to an Agile project?

In this post I am going to use terms from Scrum in particular, but the points covered are applicable to any Agile process (and more than likely non-Agile processes as well).

Agile projects are focused on meeting the needs of their customers (the Product Owner in Scrum terms), and one of the ways this is done is to use acceptance criteria to define the expected behaviour of a particular user story. It is very valuable to be able to prove that the acceptance criteria have been satisfied, whilst at the same time building a very comprehensive suite of automated regression tests. A suite of automated acceptance tests gives incredible confidence when building the system, especially in Agile developments where there may be refactoring and changes of direction as the changing needs of the Product Owner are met.

Where does automated testing fit into the process?

One of the most important things I have learnt from using automated acceptance testing on an Agile project is that it has to be a first-class part of the development process. Depending on the maturity of your team this may mean having an explicit step or task to ensure it gets done, or having it as part of your definition of done. Either way, it needs to be given the same level of importance as the other practices. A story cannot be done without automated tests that at least cover its acceptance criteria. It is almost certain that when time is tight and people are under pressure, the automated tests will be the first thing to be skipped. This is almost always a mistake, and one which I have learnt the hard way. Missing acceptance tests will come back to haunt you. Include them in your process and account for them in your planning activities.

Who writes the automated tests?

The creation of automated acceptance tests requires input from all the members of an Agile team. The roles and make-up of your Agile team may differ, but typically acceptance criteria will be generated with the input of the product owner and analyst (and maybe also the tester in some teams), who will also create the specific scenarios to be automated. As the tests are automated, there is obviously some development work involved in performing the automation. There are ways to get a lot of reuse from a test suite and to reduce the amount of additional development required to extend it. I will talk about some of these when looking at specific tools and technologies in a later section.

What does it mean for ‘traditional’ testing on the project?

In recent years I have become more convinced that it is no longer possible to speak about the traditional siloed roles in software development. The roles of analyst, developer and tester continue to blur as roles change and responsibilities overlap. The creation of automated acceptance tests requires a little bit of all these traditional disciplines. Who will be involved in your team depends on the skills and abilities of the team members. To make automated testing work in your team it will be necessary to let go of the idea that developers don’t do testing, or that testers don’t write code.

That said, automated acceptance tests are not meant as a replacement for traditional exploratory testing, nor to undermine the role of traditional testing in any way. Instead they should be seen as complementary.

This is a great post that discusses the aims of automated and exploratory testing:

Automated vs. exploratory – we got it wrong!

Automated acceptance testing as part of the build process

To get the most out of automated acceptance tests they need to be part of your Continuous Integration build. Acceptance tests are never going to be fast-running, and this can pose some challenges to getting the best feedback from your tests. As an indication, my current project has about 300 Selenium-based tests and takes about an hour to run. We are 7 sprints in and the number of tests is going to get much higher. At present we have a fast CI build running only the unit tests to give instant feedback, followed by an integration build containing all the acceptance tests that runs after a successful CI build. We are finding that the feedback is a bit slow, so we are planning to introduce the concept of a ‘current’ integration build that will run the subset of the acceptance tests tagged as being in the area where fast feedback is required (i.e. the area in which work is currently being undertaken). The full set of tests will continue to run at their slower cadence.
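
Since SpecFlow emits scenario tags as NUnit categories, one way to implement such a ‘current’ build is to tag the relevant scenarios (e.g. @current) and filter the console runner; a sketch, with the assembly path assumed:

nunit-console.exe /include:current /labels /out=TestResult.txt /xml=TestResult.xml bin\Debug\CrazyEvents.AcceptanceTests.dll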

Using the right tools

My tools of choice are SpecFlow and Selenium WebDriver, but there are others that perform a similar function. It is important to understand the tools and the best way to use them. When talking about automated acceptance tests, and specifically about using Selenium to drive the browser, I often encounter people who have tried automation and then stopped as the tests became brittle and the cost of maintenance became too high. Here are a few tips from my experience of using automated acceptance tests on a large project:

  • Aim to get a lot of reuse from your SpecFlow steps. Use the features built into SpecFlow, such as ScenarioContext.Current and its DI capabilities, to make steps that can be reused across many features. If done correctly you will find that it is very easy to add new tests without writing any new steps at all.
  • Use the page object pattern to encapsulate the logic for the web pages (see the sketch after this list). This is one of the single biggest factors in ensuring your test suite is maintainable. Without page objects you will find yourself fixing lots of tests due to a small change to a single page.
  • Think about the wider ecosystem. We are using Selenium Server to give us the ability to run tests across multiple browsers.
  • Learn about your tools to get the best from them. Writing the automation will be a lot more pleasurable if you understand how things are working.
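
To illustrate the page object idea, here is a minimal sketch; the page, the element ids and the way the driver is shared are all invented for the example, so adapt them to your own setup:

// Page object: encapsulates the locators and interactions for one page, so a
// markup change means updating one class rather than many tests.
public class EventSearchPage
{
    private readonly IWebDriver driver;

    public EventSearchPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    public void SelectRegion(string regionName)
    {
        // SelectElement lives in OpenQA.Selenium.Support.UI
        new SelectElement(driver.FindElement(By.Id("Region"))).SelectByText(regionName);
    }

    public void Search()
    {
        driver.FindElement(By.Id("SearchButton")).Click();
    }
}

[Binding]
public class EventSearchSteps
{
    [When(@"I select the ""(.*)"" region")]
    public void WhenISelectTheRegion(string regionName)
    {
        // The driver is assumed to have been placed in the ScenarioContext elsewhere.
        var driver = ScenarioContext.Current.Get<IWebDriver>();
        new EventSearchPage(driver).SelectRegion(regionName);
    }
}

The step knows nothing about the markup; if the search page changes, only EventSearchPage needs updating.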

The posts below discuss how to make a start writing maintainable automated acceptance tests:

Behavioural testing in .Net with SpecFlow and Selenium (Part 1)

Behavioural testing in .Net with SpecFlow and Selenium (Part 2)

Managing SpecFlow acceptance test data with Entity Framework

Summary

  • Automated acceptance tests are about building an executable specification for your system.
  • The executable specification is very useful for proving the acceptance criteria have been met and for giving confidence that changes have not broken existing functionality.
  • Make automated tests part of your process. Do not skimp on them, even if you are under pressure to get things done. Add them to your definition of done.
  • Creating the tests is an activity for all the team, although traditional roles and responsibilities may be blurred.
  • Automated tests are in addition to traditional exploratory style testing and are not meant as a replacement.
  • Get the most from your tests by choosing a sensible CI strategy to give the best feedback.
  • Choose the right tools for the job, and spend time learning to get the best from them.
  • Become an advocate of automated testing in your organisation.

Managing SpecFlow acceptance test data with Entity Framework

A look into how Entity Framework can be used to simplify the management of data for your acceptance test scenarios.

In a previous set of posts I looked at how it is possible to use SpecFlow and Selenium to create automated acceptance tests driven via the browser.

An important part of any acceptance or integration test is the test data, and it is important that the data for each test is isolated from every other test. Often these tests require complicated data setup, which can add complication and maintenance overhead to the test suite. This post assumes that the system under test is using a SQL database.

Traditional ways to manage test data

There are a few ways to manage the test data for acceptance tests. Probably the most widely used are:

  • Run SQL scripts as part of the test setup and tear-down – this can be effective, but it means extra maintenance of the scripts is required. Such scripts are prone to errors as they are built outside the type safety that the rest of the system benefits from. This approach also requires a mechanism to run the scripts when the tests run.
  • Use a standard set of data for all tests – the main issue with this approach is that it may not allow for proper isolation of the test data. It is possible to use transactions or some similar mechanism to allow for this, but sometimes it is only the order of the running tests that ensures the correct data is available.

There are of course other ways, and many variations on the theme.

A better approach..?

It would be better if it were possible to add the data as part of the test setup in a type-safe way, using the language of the system without the need for external scripts.

If you are using an ORM in your application, it is very likely that you already have the tools needed to do just this. In this post I am going to use Entity Framework 5, but you could just as easily use NHibernate or something else to get the same result.

An example

Here is the SpecFlow scenario we are going to automate.

@eventSearch
Scenario: Search for events in a region and display the results
	Given that the following regions exist
	| Id | Name   |
	| 1  | North  |
	| 2  | South  |
	| 3  | London |
	And the following events exist
	| Code    | Name                              | Description                                   | Region Id |
	| CR/1001 | Crochet for Beginners             | A gentle introduction into the art of crochet | 1         |
	| CH/3001 | Cat Herding                       | A starter session for the uninitiated         | 3         |
	| CH/3002 | Cat Herding - Advanced techniques | Taking it to the next level                   | 3         |
	When I navigate to the event search page
	And I select the "London" region 
	And perform a search
	Then The search results page is displayed
	And the following results are displayed
	| Code    | Name                              | Description                           | Region |
	| CH/3001 | Cat Herding                       | A starter session for the uninitiated | London |
	| CH/3002 | Cat Herding - Advanced techniques | Taking it to the next level           | London |

It is only the first two steps of the scenario that we are interested in for this post. Even before we start satisfying the first step we should make sure that the database is in the correct state to start adding the data for the test.

To do this we can use a BeforeScenario hook. As the name suggests, this is called before each scenario is run. You will notice that our scenario has a tag of @eventSearch. The same tag is used in the BeforeScenario attribute to ensure that the Setup method is only triggered by scenarios carrying the @eventSearch tag. This becomes important when you have multiple feature files in the same project.

[BeforeScenario("@eventSearch")]
public void Setup()
{
    using (var context = new CrazyEventsContext())
    {
        // Delete the child table first, then reseed each table's identity column.
        // ExecuteSqlCommand runs immediately, so no SaveChanges call is needed.
        context.Database.ExecuteSqlCommand("DELETE FROM Events; DBCC CHECKIDENT ('CrazyEvents.dbo.Events', RESEED, 0)");
        context.Database.ExecuteSqlCommand("DELETE FROM Regions; DBCC CHECKIDENT ('CrazyEvents.dbo.Regions', RESEED, 0)");
    }
}

We are using the ExecuteSqlCommand method to clear the contents of the tables and reseed their identity columns. If a table is not used as part of a foreign key relationship it is possible to use TRUNCATE TABLE instead.
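
As a sketch, the TRUNCATE variant of a cleanup command would look like this (the table name is invented for the example):

// Only valid when the table is not referenced by a foreign key constraint;
// TRUNCATE TABLE also resets the identity seed, so no DBCC CHECKIDENT is needed.
context.Database.ExecuteSqlCommand("TRUNCATE TABLE SomeLookupTable");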

The next step is to use Entity Framework to populate the region and event details from the SpecFlow table. Although I have these steps as part of a scenario, you could also use a Background step to have the data populated at the start of each scenario.

[Given(@"that the following regions exist")]
public void GivenThatTheFollowingRegionsExist(Table table)
{
    var regions = table.CreateSet<Region>();
 
    using (var context = new CrazyEventsContext())
    {
        foreach (var region in regions)
        {
            context.Regions.Add(region);
        }
                
        context.SaveChanges();
    }
}
 
[Given(@"the following events exist")]
public void GivenTheFollowingEventsExist(Table table)
{
    var events = table.CreateSet<Event>();
 
    using (var context = new CrazyEventsContext())
    {
        foreach (var crazyEvent in events)
        {
            context.Events.Add(crazyEvent);
        }
 
        context.SaveChanges();
    }
}

SpecFlow has some table helper extension methods. I am using table.CreateSet&lt;T&gt;() to create an IEnumerable of the specified type T from the data in the table. For this to work your table column names need to be the same as the properties of your entity (you can use spaces for readability if you like).

After I have the list of entities, I simply use the Entity Framework context to add each entity to the context in turn. Remember to call SaveChanges() to perform the update.
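
For completeness, here is roughly what the entities bound by CreateSet might look like; the important thing is that the property names line up with the table column headers (the spaces are ignored, so ‘Region Id’ maps to RegionId):

public class Region
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Event
{
    public int Id { get; set; }
    public string Code { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public int RegionId { get; set; } // bound from the 'Region Id' column
}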

And that is all there is to it. The nice thing about this method of managing test data is that the data required for the test is specified as part of the test itself. Everyone can see what data is expected to be present when the test is run.

The easy way to run Selenium and SpecFlow tests in a TeamCity build with IIS Express

In Behavioural testing in .Net Part 1 and Part 2 I looked at creating behavioural acceptance tests with SpecFlow and Selenium. At some point after you have created your tests you will probably want to run them as part of a Continuous Integration build, or a nightly build or similar on a CI server such as TeamCity.

As the tests rely on browser interaction, they obviously need the website to be running on the build server for the tests to function. There are several ways to get a web application up and running on the build server as part of the build; Web Deploy is probably one of the best.

However, there are times you might want to get things up and running quickly as part of a spike or just to try something out. I am not suggesting that this is necessarily enterprise strength, but it may well be enough depending on your requirements.

This is not a TeamCity tutorial, so I am going to focus on the build steps required.

Step 1 – Build the project

This is fairly obvious. You cannot do much without a successful build.

Step 2 – Start the web application with IIS Express

IIS Express is a lightweight web server that is generally used for development purposes. It can be used in various ways and with different types of web application. For more information on how it can be used see this page. The interesting bit is that IIS Express can launch a web application directly from a folder at the command prompt, and can be used for various types of application including ASP.NET, WCF and even PHP.

A Command Line build runner is in order.

The text for the custom script is:

start C:\"Program Files (x86)"\"IIS Express"\iisexpress.exe /path:"%teamcity.build.checkoutDir%\CrazyEvents" /port:2418

This tells TeamCity to start a new command window and run the IIS Express executable. The /path: parameter tells IIS Express which folder contains the web application. I have used a TeamCity parameter to specify the checkout directory. You need to make sure this points to the directory of your web application; if it is not correct, the build log will tell you exactly where it is pointing and you can adjust it as required.

The /port: parameter specifies the port that the website will run on. This needs to be the same as the port specified in your tests.
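
For example, if your step definitions build their URLs from a base address, the port in that address must match the /port: value above. A sketch (the driver accessor and page path are assumptions):

// The port here must match the /port: value passed to IIS Express.
private const string BaseUrl = "http://localhost:2418";

[When(@"I navigate to the event search page")]
public void WhenINavigateToTheEventSearchPage()
{
    var driver = ScenarioContext.Current.Get<IWebDriver>();
    driver.Navigate().GoToUrl(BaseUrl + "/Event/Search"); // path assumed
}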

Just a quick note about the script: TeamCity has some peculiar behaviour regarding double quotes in command strings, and it took me a little while to get it right. If you have problems but the command looks correct, double-check the quote placement.

As another side note, you will not be able to see the command window when the step runs, but it will be running and you should be able to navigate to the website on the build server.

Step 3 – Running the tests

Run the tests in the usual way with the correct build runner.

Step 4 – Stop IIS Express

The final step is to stop IIS Express. We will use another command line runner.

The custom script is:

taskkill /IM IISExpress.exe /F

which forces the IIS Express instance to close. Once again, you cannot see this in action, but it is working in the background.

And that is all there is to it, quick and easy. Behavioural testing doesn’t have to mean a painful setup and teardown experience.
