Dialogues and discussions

At a recent team meeting, thoughts quickly turned to problems with an upcoming release and the state of software development at the organisation. A general feeling that things weren't better anywhere else gave way to the usual suspects: incomplete requirements, no focus on quality and a lack of resources. On the list of things that could be done was a meeting between the team and other members of the department, including the development manager and the Head of IT.

Comments about the idea of such a meeting included:

  • Similar meetings in the past had made no real difference
  • With senior management present people would not feel able to discuss certain topics
  • Nobody would pay any attention until there was a serious failure

The comments seemed plausible; I have been in enough meetings of that type to know that this one probably wasn't going to help, at least not without a change of approach. I recalled Peter Senge discussing different types of discourse, and the differences between them, in 'The Fifth Discipline'. I grabbed my copy and started reading. Below I have attempted to summarise my understanding.

Dialogues and discussions

British physicist David Bohm developed the idea that dialogue and discussion are related but different concepts, and that it is the synergy between dialogue and discussion that yields their power. Bohm asserts that this power is unlikely to exist if the distinctions between the two are not appreciated. Most people are aware that not all discussions are the same in tone, focus or outcome, but few have thought about, or could explicitly call out, the differences in their experiences.

For Bohm, a “dialogue can be considered as a free flow of meaning between people in communication, in the sense of a stream that flows between banks” in an effort to gain understanding of a complex issue. A discussion is used to present different points of view with the intent of finding a course of action. The table below summarises the differences:

Decision Making
  • Discussion: A discussion converges on a course of action and decisions are made.
  • Dialogue: Complex issues are explored, but no decisions are made during a dialogue. Instead of converging on a decision, a dialogue diverges to gain understanding, not to seek consensus. The outcome of a dialogue may feed into a discussion when a decision needs to be made.

Purpose
  • Discussion: Although it is not always immediately obvious, in most cases the purpose of a discussion is to "win" by having your view accepted by the group. It is not hard to see how this might be at odds with the espoused outcome of a discussion.
  • Dialogue: A dialogue is not won at an individual level but at a group level, with the group accessing a larger pool of common meaning that goes beyond any one individual's meaning.

Viewpoint
  • Discussion: The different views of the group are presented and defended to give an overall view of the situation from different directions. The preferred view is then selected from all the possible alternatives.
  • Dialogue: Different views are presented in a safe environment as a means to discover a new viewpoint.

I think it's fairly safe to say that most organisations cannot always distinguish between the two; indeed, neither could I until I had read The Fifth Discipline. I can relate to situations that I can now identify as dialogues, and others that were discussions. That is not to say it has to be an either/or situation: once a team can make the distinction, it can move between dialogue and discussion as required.

Context for a dialogue

In order to explore complex issues in a safe environment, a context must be established. Bohm sets out some initial conditions:

  • All participants must “suspend” their assumptions “as if suspended before us”. This is not to say the assumptions and opinions are bad, just that we should hold up our opinions for observation, questioning and examination. It is vital that participants are free to talk about their assumptions without prejudice or fear. People are naturally cautious about revealing their assumptions but it is not possible to have a productive dialogue if people are defending their assumptions or are simply unaware of them.
  • All participants must regard one another as colleagues. I know what you are thinking: the people I work with are, by definition, colleagues. That is not the point Bohm was making. Instead, it means that the participants must see the dialogue as a mutual quest for insight, without feeling vulnerable to office politics, ridicule, coercion or judgement. One of the initial comments about the meeting hit upon an important related idea: that certain topics would not be discussed freely with senior management present. This is a very common problem (or untested assumption?), but it is vital that organisational hierarchies are not recreated in the dialogue. Transparency should transcend hierarchy.
  • A facilitator is a prerequisite; without one, the dialogue will be pulled towards discussion. As well as providing the usual benefits of facilitation, the dialogue facilitator should keep the dialogue on track. It is their job to ensure that the conditions of the dialogue are being met. Once a team has become accustomed to dialogues, it may find it can control the dialogue without the need for a facilitator.

Team Learning

As with most concepts in Systems Thinking, establishing effective dialogues takes practice to build the trust and respect required. With practice effective dialogue should also lead to effective discussions.

With regard to the original idea of having a meeting with senior management, it seems to me that the first step should be a meeting in the spirit of dialogue rather than discussion, not least because there were numerous untested assumptions and some underlying tension present. I acknowledge that this is not an easy thing to do, but team learning has to start somewhere. I also think the group needs to make more opportunities for team learning as a whole, where people can share their views in a safe environment. This could be a retrospective-like activity following a regular cadence.

Dialogues and discussions in software development

Within Agile software development there are parallels between the ideas of dialogue and discussion, and retrospectives. This should not be a surprise, as the purpose of retrospectives is also team learning. As with effective dialogue, effective retrospectives take practice to build the requisite trust within the team. Retrospectives are often structured to have distinct sections of dialogue and discussion. The retrospective prime directive attempts to establish the correct context, effectively asking people to suspend their assumptions and biases. Sections such as setting the scene and gathering information try to explore the complex issue without searching for actions and outcomes. When generating insight the team attempts to converge on a possible set of actions, before finally selecting some tangible actions for the next iteration.

Sources:
Peter Senge – The Fifth Discipline
David Bohm – Bohm dialogue

Dynamic Method Invocation in C# with PetaPoco (Part 1)

A look at a powerful development technique as used in the PetaPoco Micro ORM

In a previous post I talked about using PetaPoco on a recent project. I mentioned that it uses Dynamic Method Invocation to create a method on the fly to convert an IDataReader record into a POCO.

The technique itself is very powerful in the right situation, and is used to good effect in PetaPoco (and other micro ORMs). If you had to write a method to convert a known IDataReader instance into a known POCO instance it would be straightforward enough. However, if you knew neither the format of the values in the IDataReader, nor the type of the POCO, it would be more difficult. You would have to write quite a convoluted method to deal with it, and the conditional logic (and reflection) would make it perform poorly, especially as the method could potentially be called many times to load a dataset. Instead you could examine the IDataReader and the type, and use Dynamic Method Invocation to build a method that works for that exact situation.

There are lots of problems that can be solved by the generalised form of this approach, but there are a couple of reasons I think it is not more widely used: the technique is a little misunderstood and can sometimes feel a bit like magic, and it can look intimidating because the method body is built by emitting .Net Intermediate Language (which has gone through various guises, but is now known as Common Intermediate Language, or CIL for short), which can be outside some developers' comfort zones.

The basic idea is that PetaPoco generates a method to hydrate a POCO at runtime by examining the IDataReader and creating a one-off factory method to perform the conversion, so let's dive in and have a look at what PetaPoco is doing. A useful place to start is by looking at how the dynamically created method will be used.

At this point it is worth mentioning that PetaPoco makes use of the Dynamic Language Runtime (DLR) introduced as part of .Net 4.0, and the ExpandoObject type. This means that it can return a dynamic ExpandoObject with properties matching the fields returned from the IDataReader. It also supports .Net 3.5 without the DLR via #if directives. I am going to assume that we are using .Net 4.0 and will be looking at the code used to return an ExpandoObject.

This particular usage is from the Query method, which returns an IEnumerable<T>, where T is the type of the POCO you want to return. Inside you will find this:

var factory = pd.GetFactory(cmd.CommandText, _sharedConnection.ConnectionString, 
   ForceDateTimesToUtc, 0, r.FieldCount, r) as Func<IDataReader, T>;
using (r)
{
	while (true)
	{
		T poco;
		try
		{
			if (!r.Read())
				yield break;
			poco = factory(r);
		}
		catch (Exception x)
		{
			OnException(x);
			throw;
		}

		yield return poco;
	}
}

Our first clue is that Query uses the GetFactory method to obtain a factory, which is then used to return the POCOs for the IEnumerable one at a time via yield return. Notice that the return value from GetFactory is cast to a Func<IDataReader, T>, as GetFactory returns a Delegate.

public Delegate GetFactory(string sql, 
                string connString, bool ForceDateTimesToUtc, 
                int firstColumn, int countColumns, IDataReader r)

The first part of GetFactory does some caching to ensure that factories are reused as far as possible, then a new DynamicMethod is created:

var m = new DynamicMethod("petapoco_factory_" + PocoFactories.Count.ToString(), 
    type, new Type[] { typeof(IDataReader) }, true);

This particular constructor creates an anonymously hosted dynamic method, which means the dynamic method is associated with an anonymous assembly rather than an existing type. This isolates the dynamic method from the other code and provides some safety for use in partial trust environments. The constructor takes a name, a return type (in this case the POCO type), a Type[] containing the parameter types (just the one, the IDataReader) and a bool that sets restrictedSkipVisibility, allowing the method access to private, protected and internal members of existing types.

So now we have a new DynamicMethod, we can start to generate the IL. First, get the ILGenerator:

var il = m.GetILGenerator();

Now we need to start building up the method body with IL. Everything you can do in C# can be written in IL (after all, C# and the other .Net languages are compiled down to IL), but the IL can be more verbose. IL is also a stack-based language, so operands are pushed onto the stack, then operators pop the operands from the stack, perform an operation and push the result onto the top of the stack.
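To make the stack-based model concrete before we look at the PetaPoco code, here is a minimal, self-contained sketch of my own (not from PetaPoco) that emits a method to add two ints and then invokes it:

using System;
using System.Reflection.Emit;

class Demo
{
    static void Main()
    {
        var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
        var gen = add.GetILGenerator();
        gen.Emit(OpCodes.Ldarg_0);  // stack: a
        gen.Emit(OpCodes.Ldarg_1);  // stack: a, b
        gen.Emit(OpCodes.Add);      // pops both operands, pushes a + b
        gen.Emit(OpCodes.Ret);      // returns whatever is on top of the stack
        var adder = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(adder(2, 3)); // prints 5
    }
}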

There are three different sections, depending on the type to be returned. Although geared towards POCOs, PetaPoco can also happily return an ExpandoObject or a single scalar value.

First up is the section that returns an Expando with properties mirroring the IDataReader row. We will look at the highlights and hopefully learn some IL as we go. The first thing is to create the ExpandoObject and place it on the top of the stack (and also the bottom, as the stack is currently empty).

il.Emit(OpCodes.Newobj, typeof(System.Dynamic.ExpandoObject)
                            .GetConstructor(Type.EmptyTypes));

We are going to need to call a method on the Expando at some point to add the property. A MethodInfo is defined to hold the method meta-data for the Add method of the Expando. It will be used later.

MethodInfo fnAdd = typeof(IDictionary<string, object>).GetMethod("Add");

Now we can loop through all of the columns in the IDataReader and add them to the Expando using IL. There is some additional logic for data type conversion to support the various IDataReader implementations for some databases. For the sake of brevity I will leave the converter logic out and focus on the main logic. I have left in the comments from the PetaPoco source.

// Enumerate all fields generating a set assignment for the column
for (int i = firstColumn; i < firstColumn + countColumns; i++)
{
	var srcType = r.GetFieldType(i);

	il.Emit(OpCodes.Dup);	                // obj, obj
	il.Emit(OpCodes.Ldstr, r.GetName(i));	// obj, obj, fieldname

First, get the type of the IDataRecord field corresponding to column i. Recall that the IL stack already has the Expando we created earlier on it (or, to be more precise, a reference to the object). Dup creates a duplicate of that reference, and Ldstr pushes a string reference for the column name.

        // r[i]
	il.Emit(OpCodes.Ldarg_0);	        // obj, obj, fieldname, rdr
	il.Emit(OpCodes.Ldc_I4, i);		// obj, obj, fieldname, rdr,i
	il.Emit(OpCodes.Callvirt, fnGetValue);  // obj, obj, fieldname, value

The next three statements are used to get the value from the IDataReader. Ldarg_0 pushes the argument at position 0 onto the stack, which is the IDataReader. Then Ldc_I4 pushes the int value of i onto the stack. Callvirt is used to call a method on an object; fnGetValue was defined previously and is a MethodInfo for IDataRecord.GetValue(i). The object in question is the IDataReader and the argument is i. The result of the method call is left on the top of the stack.
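fnGetValue is not shown in the excerpt above; in the PetaPoco source it is defined earlier, along these lines (a sketch of the idea):

// MethodInfo for IDataRecord.GetValue(int), used as the target of the Callvirt
MethodInfo fnGetValue = typeof(IDataRecord).GetMethod("GetValue", new[] { typeof(int) });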

        // Convert DBNull to null
	il.Emit(OpCodes.Dup);  // obj, obj, fieldname, value, value
	il.Emit(OpCodes.Isinst, typeof(DBNull)); // obj, obj, fieldname, value, (value or null)

Call Dup to duplicate the value on the top of the stack. Isinst is used to check whether the object on the top of the stack is an instance of a particular type, in this case DBNull. If the value is a DBNull it is cast to DBNull and pushed onto the top of the stack; if it is not, a null reference is pushed instead. The top of the stack now contains either a reference to the DBNull or a null reference.

        var lblNotNull = il.DefineLabel();
	il.Emit(OpCodes.Brfalse_S, lblNotNull);	 // obj, obj, fieldname, value
	il.Emit(OpCodes.Pop);			 // obj, obj, fieldname
        il.Emit(OpCodes.Ldnull);		 // obj, obj, fieldname, null
        il.MarkLabel(lblNotNull);
        il.Emit(OpCodes.Callvirt, fnAdd);
    }

This section can be a bit confusing, so I will step through it slowly. It defines an if statement where execution branches depending on a condition. First, define a label; this label will be used as the execution target for the branch. Recall that the top two items on the stack are the value from the IDataReader and either a DBNull reference (if the value is DBNull) or null (if it is not).

The Brfalse_S instruction pops the top item from the stack and checks it for a false-ish value: false, null or zero. If it is false-ish, execution jumps to the label lblNotNull. Remember that a null on the top of the stack means the value is not DBNull, and that value is now left on the top of the stack. Execution passes to the point marked by MarkLabel and continues with the Callvirt on the fnAdd MethodInfo we defined earlier, which pops values from the stack to use as arguments until it gets to the ExpandoObject, and then calls the method on it. This has the effect of adding a property with the field name of column i and the value from IDataReader column i.

If the branch condition is not false (meaning the value on the stack is DBNull), execution continues without jumping. As DBNull is of no use to us in the Expando, it is popped from the stack and Ldnull is used to push null onto the top of the stack. It is this null value that is then used by the Callvirt to add a property with a null value to the Expando.
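In plain C#, the whole DBNull branch boils down to something like this (fieldName and value stand in for the items on the stack):

expando.Add(fieldName, value is DBNull ? null : value);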

This set of instructions is repeated for each column in the IDataReader, until the stack contains just a reference to the ExpandoObject, which has a property for each column. Finally the Expando object is returned.

il.Emit(OpCodes.Ret);

You may be thinking that the above steps could be done without resorting to building up a dynamic method in IL, and you would be correct. It is trivially easy to add named properties to an ExpandoObject.  PetaPoco does it in IL because it is part of a larger area of code that can return an ExpandoObject, a scalar or a POCO depending on what has been asked for.
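For comparison, here is a minimal sketch of my own (not PetaPoco's code) of the same Expando hydration written directly in C#:

using System;
using System.Collections.Generic;
using System.Data;
using System.Dynamic;

static dynamic ReaderToExpando(IDataReader r)
{
    // ExpandoObject implements IDictionary<string, object>, so properties
    // can be added by key, just as the emitted Callvirt on fnAdd does.
    IDictionary<string, object> expando = new ExpandoObject();
    for (int i = 0; i < r.FieldCount; i++)
    {
        var value = r.GetValue(i);
        expando.Add(r.GetName(i), value is DBNull ? null : value);
    }
    return expando;
}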

This covers the case when PetaPoco needs to return a dynamic ExpandoObject. Head over to Part 2 to see how PetaPoco goes about returning a scalar value or a typed POCO.

PetaPoco in the wild

A few brief thoughts on my experiences using PetaPoco micro ORM on a ‘real’ project in a production environment.

The problem

I have been involved with several projects that have used Object Relational Mappers, specifically NHibernate and Entity Framework. For these projects the decision to use an ORM was made at the start of the project and the architecture designed accordingly. The project team had full control of the database and the domain of the project was used to drive out the database schema in a code first style. My entire experience (and biases) of ORMs was based on this kind of usage, together with the learning curve that comes with them.

In my latest project the context was very different for a few reasons:

  • The project was a re-engineering of an existing application, which was in fairly bad shape.
  • It featured a hand rolled data access layer with lots of conflicting and ambiguous helper methods that still required lots of boilerplate to deal with things like mapping the results to POCOs and dealing with connection management.
  • A rather limited domain model, often with one class representing multiple domain entities.
  • Very limited control of the databases in use. One in particular appears to be from a time when Hungarian notation was de rigueur and vowels were something to be frowned upon when naming tables and columns. Add to that the spectre of Entity Attribute Value pairs and no primary keys.

After stabilising the code base with sensible separation of concerns and lots of refactoring I wanted to replace the data access layer. I had already separated out the data access stuff into repositories, but the hand rolled data access still left a lot to be desired.

Introducing something like Entity Framework (my current ‘full scale’ ORM of choice) didn’t feel like the right thing to do for a few reasons:

  • First and foremost, it would be a lot of work to retrofit EF, would cause a lot of disruption for the project, and could seriously impede forward progress. Although not immense, the learning curve for the team would still take time to overcome.
  • The databases are not ORM friendly in any way. Dozens of poorly named tables, each with dozens (some with hundreds) of poorly named columns.
  • EF might just be a bit too heavyweight. The project isn't huge, and just needs something to take care of the heavy lifting.

Choosing an ORM

I was aware of micro ORMs and their aims through a couple of the more popular examples, Dapper and Massive. Both are notable for their creators: Dapper was created by Sam Saffron (of Stack Overflow) and is used to drive the data access on the Stack Exchange family of sites, while Massive was created by Rob Conery of TekPub.com.

Although both were promising, neither seemed quite right. Whilst looking at the performance comparisons of ORMs on Dapper's project page, I noticed another micro ORM called PetaPoco by Topten Software, and on closer inspection it appeared to meet most of my vague criteria:

  • It was lightweight. Although not as small as either Dapper or Massive, it is packed with features that help to take the pain out of data access.
  • It can be used with POCOs without the need for attributes or sub-classing.
  • It uses SQL to interact with the database. In light of the project this seemed sensible as it would aid refactoring of the data access code. It would be possible to do a straight replacement of the repository methods reusing the existing SQL with PetaPoco’s functionality, and then expanding the domain model as required. This would allow for minimum disruption.
  • It has a notion of transactions, including nested transactions (see the sketch after this list). This is useful as I had introduced a unit of work to deal with transactions across different service methods and had need for some transactions.
  • It had to perform relatively well. The project didn't call for best-of-class performance, it just had to be good enough. But it's good to know that PetaPoco has it covered fairly well.
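On the transaction point, usage is pleasantly terse. A minimal sketch using PetaPoco's GetTransaction/Complete pattern (the connection string name, table and SQL are assumptions for illustration):

var db = new PetaPoco.Database("crazyEventsConnection");
using (var scope = db.GetTransaction())
{
    // Nested GetTransaction calls enlist in the same outer transaction
    db.Execute("UPDATE tblEvnt SET RgnCd = @0 WHERE EvntCd = @1", "NTH", "CR/0001");
    scope.Complete(); // without this, the transaction rolls back on Dispose
}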

Getting started

I grabbed the latest version of the PetaPoco core package from NuGet. There is also a version with some T4 templates for generating POCO entities from your database. I didn't need this functionality; in most cases the database should model the domain, not the other way round. One of the big advantages of working code first is keeping the focus on the domain and not on the persistence.

PetaPoco is added to your project as a single .cs file, not as a referenced assembly. This would turn out to be useful later on. Many of the micro ORMs work in a similar way and are fairly straightforward wrappers around IDbCommand. If you have ever created your own DAL using IDbCommand, you have probably been two-thirds of the way to a micro ORM.
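To illustrate the point, this is the kind of IDbCommand boilerplate a micro ORM wraps up for you (a hand-rolled sketch; the connection string, SQL and RegionDto type are assumptions, and the usual System.Data.SqlClient and System.Collections.Generic usings apply):

using (var connection = new SqlConnection(connectionString))
using (var command = connection.CreateCommand())
{
    command.CommandText = "SELECT Id, Name FROM Region";
    connection.Open();
    var regions = new List<RegionDto>();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Map each column by hand, remembering to cope with DBNull
            regions.Add(new RegionDto
            {
                Id = reader.GetInt32(0),
                Name = reader.IsDBNull(1) ? null : reader.GetString(1)
            });
        }
    }
}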

PetaPoco makes the most of the latest C# features. The heavy lifting of materialising entities from the database into objects is done through Dynamic Method Invocation. When a particular entity is requested, a method is generated on the fly that takes an IDataReader record and returns the required type. The methods are generated by examining the IDataReader's columns and creating a custom mapping from the IDataReader record to the entity. The generation is done by emitting and then executing IL at runtime. It's a deceptively powerful technique when applied to problems like this.

If you aren't afraid of using the dynamic features of C#, there is support for that too. During the transitional stages of the project I found it useful for situations where the domain model was not fully realised and data might be coming from multiple tables at once. Just execute a select and a dynamic object is returned with the properties from the select. Similar things can be done for inserts and updates.
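For example, something along these lines (the connection string name, table and column names are assumptions for illustration):

var db = new PetaPoco.Database("crazyEventsConnection");
foreach (var row in db.Query<dynamic>("SELECT EvntCd, EvntNm FROM tblEvnt"))
{
    // Properties are created dynamically from the select list
    Console.WriteLine("{0}: {1}", row.EvntCd, row.EvntNm);
}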

First impressions

My first impressions are overwhelmingly positive, but there are a few things to take into account. In summary:

  • It is very easy to get things up and running. Download the package from NuGet, add a connection string, and you are good to go in a couple of minutes.
  • The syntax is easy and intuitive, and most things work as you would expect. There is currently no support for updates to tables with composite keys, which in my opinion is a serious oversight. I appreciate that PetaPoco is geared towards working with well-designed, clean databases with sensible names and keys; however, I think it could easily be used with legacy databases with a few tweaks. Half an hour with the source (I told you it would be useful to have the .cs file) and I had added composite key support for updates. I would hope to see composite key support added at some point.
  • Although I was initially drawn to the POCO support, the syntax is friendlier when using attributes on the entity classes. Indeed, some things, like result-only columns (i.e. not persisted), can only be used through attributes. I found that the more I used PetaPoco, the more comfortable I became with attributes on the classes.

Overall I think that PetaPoco (and micro ORMs in general) are probably enough for a lot of the applications you want to create, especially the type of line-of-business applications that a lot of people spend their time writing. Before my experience of using a micro ORM on a 'real' project I would have always opted for one of the feature-rich, enterprisey ORMs, but now I would investigate whether a micro ORM would work just as well without the weight. They are certainly not just a niche product for hobby projects, and I hope to see more wide-scale use in business environments.

Time will tell…

Test Driven Development with ASP.Net MVC (Part 4)


In Part 3 we got the first test up and running. Now we can get straight on with adding more tests and functionality to meet the needs of the User Story. A quick look at the Acceptance Criteria suggests the next piece of functionality to implement:

Scenario 1 – Search criteria options are pre populated
GIVEN that I want to search for an event by region
WHEN I select to choose a region
THEN a list of possible regions is presented

I think at this point I would like to expand the Acceptance Criteria again, to make them a bit more explicit. You shouldn't be afraid to make changes and clarifications as you go; it is all part of the process. So I will change Scenario 1 to:

Scenario 1 – Search criteria options are pre populated
GIVEN that I want to search for an event by region
WHEN I select to choose a region
THEN a list of possible regions is presented
AND the list should contain "All", "North", "South" and "London"
AND "All" should be the default value

This looks a little better to me. It also makes me think about another Scenario that we need to cover, but more about that later. First, we can start with another test:

        [TestMethod]
        public void PresentAListOfAllPossibleRegionsForTheUserToSelectFrom()
        {
        }

The next part of my TDD process is to think about the behaviour of the functionality that the Controller will need to perform, and how it will achieve it within the architecture of the system. We need some data, and the application will have a service layer, so this is a good indication that we will need a service of some kind. We also need to get some data to the view, so we are going to need a model of some kind. Let's fill in the test.

        [TestMethod]
        public void PresentAListOfAllPossibleRegionsForTheUserToSelectFrom()
        {
            //Arrange
            var mockEventService = new Mock<IEventService>();

            var response = new GetAllRegionsResponse();
            response.Regions = new List<RegionDto>();
            response.Regions.Add(new RegionDto {Id = 1, Name = "North"});
            response.Regions.Add(new RegionDto {Id = 2, Name = "South"});
            response.Regions.Add(new RegionDto {Id = 3, Name = "London"});

            mockEventService.Setup(x => x.GetAllRegions()).Returns(response);

            var eventController = new EventController(mockEventService.Object);

            //Act
            var result = (ViewResult)eventController.Search();
            var model = (SearchViewModel)result.Model;

            //Assert
            Assert.AreEqual(3, model.Regions.Count(),
                "Unexpected Number of Regions returned");
        }

There is a bit going on in this test, so I will try to explain it. Remember that the red text is ReSharper's way of telling us that the implementation is missing, so there is lots of work to do to get this test up and running. As we are defining the test first and then implementing to make it pass, we only implement the functionality we know we are going to need, not the functionality we think we may need in the future.

The first line is creating a mock for the IEventService. My mocking framework of choice for .Net is Moq. The next part is creating a Response object to be returned from the mock service when we call the GetAllRegions method.

mockEventService.Setup(x => x.GetAllRegions()).Returns(response);

This line sets the behaviour of the mocked IEventService. It basically says: when the GetAllRegions method of the IEventService is called, return the response described in the test, not the actual response from the real implementation of IEventService.

Now that we are using a service, we would expect it to be passed into the constructor of the Controller, IOC style. The assertion for this test will ensure that the 3 regions from the service call are populated on the model.

Now the test is complete we can start on the implementation. I tend to start at the top of the test and fill in the implementation as I go. My advice is to leverage the refactoring capabilities of ReSharper (or your tool of choice – even standard Visual Studio has tooling to take some of the pain away). As part of the implementation I am going to introduce Ninject to take care of the IOC duties.

The steps to complete the implementation go a little something like this:

  • Add a new project for the services and related things
  • Setup Ninject for the IOC (a minimal binding sketch is shown below)
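A minimal binding might look like this (the module name is an assumption; wiring the module into MVC, via a Ninject controller factory or the Ninject.MVC3 package, is omitted):

using Ninject.Modules;

public class CrazyEventsModule : NinjectModule
{
    public override void Load()
    {
        // When an IEventService is requested, supply the concrete EventService
        Bind<IEventService>().To<EventService>();
    }
}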
  • Create a service contract and an empty service implementation
public interface IEventService
    {
        GetAllRegionsResponse GetAllRegions();
    }
public class EventService : IEventService
    {   
        public GetAllRegionsResponse GetAllRegions()
        {
            throw new NotImplementedException();
        }
    }
  • Create the Response and RegionDto
public class GetAllRegionsResponse
    {
        public List<RegionDto> Regions;
    }
public class RegionDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
  • Create the SearchViewModel
 public class SearchViewModel
    {
        private List<RegionDto> regions = new List<RegionDto>();

        public List<RegionDto> Regions
        {
            get { return regions; }
            set { regions = value; }
        }
    }
  • Update the Controller to use the new service
public class EventController : Controller
    {
        private readonly IEventService eventService;

        public EventController(IEventService eventService)
        {
            this.eventService = eventService;
        }

        public ViewResult Search()
        {
            var model = new SearchViewModel();

            model.Regions = eventService.GetAllRegions().Regions;

            return View(model);
        }
    }

So let's fire up the tests. But wait a second, there is a problem with the first test we wrote: the constructor is expecting an argument, and we are not giving it one. It is an easy fix, but it teaches us something about the nature of TDD; you can never assume that tests are written and then forgotten. As the functionality becomes more complex, you will often have to revisit tests, as at the time they were based on the most simple implementation that would make the test pass, which has now evolved into something more complex. We can change the test by adding in the mock for the EventService.

        [TestMethod]
        public void DisplayTheDefaultSearchEventView()
        {
            //Arrange
            var mockEventService = new Mock<IEventService>();
            var eventController = new EventController(mockEventService.Object);

            //Act
            var result = (ViewResult)eventController.Search();

            //Assert
            Assert.IsTrue(String.IsNullOrEmpty(result.ViewName));
        }

Any copy and paste is usually an indication of an opportunity for refactoring, and this is no exception. We shall look at refactoring the tests after we have them working. Let's run the tests.

[Image: Part 4 Test Results 1]

There is some good news and some bad news. The new test is passing, but now the old test is failing. The reason for this is that we are not supplying a behaviour for the mockEventService, so when the method is called it returns a null response and we are essentially asking for null.Regions. Again we can fix this easily by adding the setup for GetAllRegions to the first test. This is the second duplication of code, so some refactoring of the test class is definitely on the cards. A quick run of the test lets us know the change was successful. Just before we refactor the test class, I had a thought that I wanted to make my second test a little stronger, so I added an additional assertion. It is not uncommon to make tests more specific in this way as you uncover more about the behaviour you want to test for.

            Assert.AreEqual(1, model.Regions.FindAll(x => x.Name == "North").Count,
                "Expected to find 'North'");

I could test for all three regions, but it felt a bit like overkill at this stage. Let’s run the tests.

[Image: Part 4 Test Results 2]

That's more like it. Let the refactoring commence. We can move the duplication out into a [TestInitialize] method to share the setup amongst all the tests in the class. I have also added a [TestCleanup] method to run after each test. It contains a check to verify that all the behaviours specified in the mock setup have been exercised, so the test will fail if you expect it to do something that it does not do, even if all the assertions pass. Here is the test class in full.

[TestClass]
public class WhenSearchingEventControllerShould
{
    private EventController eventController;
    private Mock<IEventService> mockEventService;
    private GetAllRegionsResponse response;

    [TestInitialize]
    public void Setup()
    {
        //Arrange
        mockEventService = new Mock<IEventService>();

        response = new GetAllRegionsResponse
            {
                Regions = new List<RegionDto>
                    {
                        new RegionDto {Id = 1, Name = "North"},
                        new RegionDto {Id = 2, Name = "South"},
                        new RegionDto {Id = 3, Name = "London"}
                    }
            };

        mockEventService.Setup(x => x.GetAllRegions()).Returns(response);

        eventController = new EventController(mockEventService.Object); 
    }

    [TestCleanup]
    public void Teardown()
    {
        mockEventService.VerifyAll();
    }

    [TestMethod]
    public void DisplayTheDefaultSearchEventView()
    {
        //Act          
        var result = eventController.Search();

        //Assert
        Assert.IsTrue(String.IsNullOrEmpty(result.ViewName));
    }

    [TestMethod]
    public void PresentAListOfAllPossibleRegionsForTheUserToSelectFrom()
    {  
        //Act
        var result = eventController.Search();
        var model = (SearchViewModel)result.Model;

        //Assert
        Assert.AreEqual(3,model.Regions.Count,
            "Unexpected Number of Regions returned");
        Assert.AreEqual(1, model.Regions.FindAll(x => x.Name == "North").Count,
            "Expected to find 'North'");
    }
}

You can see that the setting up of the regions is tied to this test class. If we needed a test with a different setup for the call to GetAllRegions, it may be an indication that we are testing a different set of behaviours and that the new test should belong in a new test class.

So now we have a second Controller test, however there is still work to do to get things up and running in an outside-in manner. Check out Part 5 to see how we test drive the Service method we are mocking in the Controller tests, and get it talking to the Repository.


Test Driven Development with ASP.Net MVC (Part 3)


In Part 2 we looked at the feature we are going to implement, along with the behaviour it needs to exhibit. In this post we will quickly cover the architecture of the application, and then get on with the implementation.

System Architecture

A lot of the examples of using TDD with MVC are fairly simple and straightforward. Whilst this keeps the implementation simple, I thought that my example should try to reflect the real world a bit more, without going overboard. In order to really show the outside-in approach with vertical slices, I need more layers to slice through.

The “Crazy Events” application 

The application will have the following components:

  • UI Layer – ASP.Net MVC3
  • Service Layer – For the example this will be a simple in-process service layer to hold the business logic.  It could just as easily be WCF Web Services.
  • Repository Layer – To encapsulate the data access.  For the example they provide access to in-memory data, but would more likely be talking to a database, maybe with a suitable ORM.

So, from the layers, we should be able to implement test-first by testing the Controller actions, the services and any relevant repository methods that are appropriate for testing. We could also test the MVC routes, but I will leave that for another post.

The starting point

I am going to start with an almost empty solution with just two projects: an empty MVC3 project and an empty test project. I am going to use MSTest, but any unit testing framework could be used. You may already have an application template in place, with all the projects and dependencies for the layers already created. That's fine too. I just want to show that TDD doesn't have to be scary, even with an empty solution.

The first test…

Ok, so here we are, ready to start implementing the event search functionality. Writing the first test can be hard; where do you start? I always try to start with the simplest test that will make progress towards the goal.

MVC offers us some help. The URL routing in MVC can be very helpful in deciding where to go, as it too describes the behaviour of the web site. If we were to navigate to the event search page, we may well navigate to http://www.crazyevents.com/event/search. This would translate to a Controller called Event with an Action called Search. This looks like a good place to start.
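For reference, that mapping comes from the conventional route template registered in Global.asax (this is the standard MVC3 default route):

routes.MapRoute(
    "Default",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = UrlParameter.Optional });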

The most straightforward first test is to check that the Search action on the Event controller returns the default view, as per the MVC conventions. Let's write the test:

    [TestClass]
    public class WhenSearchingEventControllerShould
    {
        [TestMethod]
        public void DisplayTheDefaultSearchEventView()
        {
            //Arrange
            var eventController = new EventController();

            //Act
            var result = (ViewResult)eventController.Search();

            //Assert
            Assert.IsTrue(String.IsNullOrEmpty(result.ViewName));
        }
    }

It is possible to tell from the above code that this test will fail; it is safe to say it will not even compile. Let's take this as the first "red" stage.

At this stage I just want to talk for a second about ReSharper. For those not familiar, ReSharper is a developer productivity tool that removes a lot of friction from working within Visual Studio. It supports lots of quick navigation and refactoring options, and generally just makes the whole experience a little more enjoyable. There are similar tools available, and it is possible to work without it.

The red text is ReSharper's way of saying that something is missing. In this case it is suggesting that the EventController and Search method are missing, so let's create them.

Firstly I create the EventController. There are a few ways to do this. I tend to use the ReSharper action to create the class, which will create it in the same file. It doesn't matter for now that the controller is in the wrong place; it can be moved easily later. If you prefer, you could create the file in the correct place (in the Controllers folder of the MVC project) and add the necessary using statement. Remember, we need to add just enough code to get the test up and running, so the next step is to add the Search method on the Controller. The file can be moved to its rightful home if you haven't done so already.

using System.Web.Mvc;

namespace CrazyEvents.Controllers
{
    public class EventController : Controller
    {
        public ViewResult Search()
        {
            return View();
        }
    }
}

There is only one last thing to sort out: we are missing a view, so I will create one now. It will be very simple, and will not be strongly typed at this stage, as we do not yet know what model it will use.

@{
    ViewBag.Title = "Search";
}

<h2>Search</h2>

I know, not much to look at yet, but remember this is just one of several steps to get us up and running from a standing start. A quick run of the test shows us we are on the right track. Notice how the test class and test method name make a readable behavioural statement.

[Image: Unit Test Session]

One down, many to go…

So we now have a functioning, albeit fairly useless, MVC application. We can always run it, just to make sure everything is okay. A quick addition to the routes is required to register our new functionality. I will add it to the 'root' route for now, so the search page loads by default. It can always be changed later.

routes.MapRoute("Root", string.Empty, new { controller = "Event", action = "Search" });

Running the app will result in a fairly uninspiring page. In the next post I will continue to build up the functionality one test at a time.


Test Driven Development with ASP.Net MVC (Part 2)


In Part 1, I talked about my view on TDD, and the particular flavour I use when doing TDD with MVC. In this post I am going to have a look at the feature we want to implement.

An event booking system

The functionality we are building is for an event booking system.  There is a database full of events, and people can use the web site to search for events and make a booking.

We are interested in implementing the search functionality. We may be fortunate enough to have analysts, testers and developers working together to craft the user stories with the product owner, but at this stage it doesn't really matter how the user stories were created. What matters is that there is a framework of ideas to build from. They don't have to be perfect, but it is difficult for the system to emerge without something to kick-start the ideas.

We have the following user story:

As a potential event attendee, I want to search for events, so that I can see all the available events that match my criteria

We also have some screen mock-ups to guide us.  A couple of the more relevant ones are shown below:

[Image: Event Search Screen mock-up 1]

[Image: Event Search Screen mock-up 2]

From the mock-ups, what behaviour do we want to cover? There are lots of potential scenarios, but let's focus on a few of the more straightforward ones for now:

• Search criteria options are pre populated
• Search for an event by region
• Search for an event by name
• No results returned from search
• Search criteria is ‘remembered’ when the results are shown

You will see that some of the scenarios overlap. For example, having no search results returned could well be part of searching for an event by name or region; however, I find it better to cover the scenarios explicitly, for the benefit of test separation in both unit and acceptance testing.

From the scenario list we can now flesh out some examples to use:

Scenario 1 – Search criteria options are pre populated
GIVEN that I want to search for an event by region
WHEN I select to choose a region
THEN a list of possible regions is presented

Scenario 2 – Search for an event by region
GIVEN the following events exist

| Event Code | Event Name            | Region | Description                                   |
| CR/0001    | Crochet for Beginners | North  | A gentle introduction into the art of crochet |
| CR/0002    | Intermediate Crochet  | North  | Taking your crochet to the next level         |
| CH/3001    | Cat Herding           | London | A starter session for the uninitiated         |

WHEN I search for events in the “North” Region
THEN the search results present the following events

| Event Code | Event Name            | Region | Description                                   |
| CR/0001    | Crochet for Beginners | North  | A gentle introduction into the art of crochet |
| CR/0002    | Intermediate Crochet  | North  | Taking your crochet to the next level         |

Scenario 3 – Search for an event by Name
GIVEN the following events exist

| Event Code | Event Name            | Region | Description                                   |
| CR/0001    | Crochet for Beginners | North  | A gentle introduction into the art of crochet |
| CR/0002    | Intermediate Crochet  | North  | Taking your crochet to the next level         |
| CH/3001    | Cat Herding           | London | A starter session for the uninitiated         |

WHEN I search for events with the name “Cat Herding”
THEN the search results present the following events

| Event Code | Event Name  | Region | Description                            |
| CH/3001    | Cat Herding | London | A starter session for the uninitiated |

Scenario 4 – No search results returned from search
GIVEN the following events exist

| Event Code | Event Name            | Region | Description                                   |
| CR/0001    | Crochet for Beginners | North  | A gentle introduction into the art of crochet |
| CR/0002    | Intermediate Crochet  | North  | Taking your crochet to the next level         |
| CH/3001    | Cat Herding           | London | A starter session for the uninitiated         |

WHEN I search for events in the “South” Region
THEN I am informed that no search results have been returned

Scenario 5 – Search criteria is ‘remembered’ when the results are shown
GIVEN I have performed a search in the "North" region
WHEN the search results page is displayed
THEN the search criteria displays my choice of region

In Part 3 we will start implementing the application, guided by the behaviour defined in the scenarios.


Test Driven Development with ASP.Net MVC (Part 1)

In this series of posts I am going to talk about how I implement systems in ASP.Net MVC using TDD. It may be a little different from some other TDD examples, as I am also going to talk a bit about the supporting activities and the wider thought processes involved.


My thoughts on TDD

A lot of words have already been written about TDD in books and articles, and I am assuming that most readers will have some knowledge of at least the theory behind it. To reiterate the basics:

  • Write a test
  • Watch the test fail
  • Make the tests pass with the most simple implementation possible
  • Write another test
  • Repeat (Red, Green, Refactor)

Software is built through lots of iterations of first failing then passing unit tests. TDD becomes less about testing and more about design as the system is driven out by the fulfilment of tests.

But even for people familiar and competent with the canonical examples of TDD, using TDD with a more complex, multi-layered application can be daunting. Most modern systems fall into that category, and I will attempt to use an example that is a bit more like the situation you may find yourself in, in real life.

One of the things I have experienced when using TDD is that every practitioner has their own style, their own ‘flavour’ and dialect for their tests.  I don’t think that there are any right or wrong ways to do TDD, as long as the overall goals are achieved:

  • The design of the application is emergent
  • The logic is exercised accordingly
  • The tests provide confidence when refactoring or adding new functionality
  • The code is inherently testable, through the application of good design principles and practices

My own flavour of TDD for MVC is guided by the following principles:

  • Behavioural in nature – When embarking on development it is vital to understand the behaviour of the thing you are going to build. For this reason, it should be specified in a suitable manner. My preference is for user stories with a strong set of acceptance scenarios (I tend to use the Given-When-Then format) and screen mock-ups where possible (Balsamiq is my tool of choice). The tests can then focus on meeting the behavioural needs, not just the technical ones.
  • Vertical Slices – I prefer to work through vertical slices of functionality, so that the User Story and scenarios relate to a discrete piece of functionality that can be tested and demonstrated as a whole.  This is very different from the traditional approach of working through the layers of an application layer by layer.
  • Outside in – I like to start at the outside of an application and work inwards. Where the "outside" of the application is depends on your approach. For this example I am going to treat the Controller as the "outside" and work downwards towards the data layer. Other approaches see the UI layer (the view in MVC) marking the "outside" of the application. I will discuss how these two approaches differ in a short while.
  • Descriptive Naming – I aim to have the test names read as complete sentences that describe the behaviour under test (for example, WhenSearchingEventControllerShould.DisplayTheDefaultSearchEventView, which we will meet later in the series). It is not an exact science, and the names often undergo some changes as the tests progress.

Isn’t this Behaviour Driven Development?

People who are familiar with BDD will instantly recognise some of the terminology, and certainly my approach borrows heavily from the BDD movement.  I am reluctant to call this approach BDD for a few reasons:

  • I am not testing the true behaviour of the website from the UI itself; rather, I am testing how the controller fulfils the behavioural needs of the UI. A small difference, but an important one. A true BDD approach would be to drive the UI with a tool like WatiN, in conjunction with a BDD-specific tool like SpecFlow. SpecFlow can also be used to test at a feature level without having the tests drive the UI, but I think this weakens the overall effect.
  • Although I am focussed on behaviour, the tests are still unit tests (albeit with behavioural names). The tests are still in the technical domain and do not abstract away all the technical details. Again, a true BDD approach would see the feature tests specified by their behaviour, not their technical specification. Once again, BDD tools help with this, but they can add more weight to the process.

In Part 2 the discussion moves to the feature that needs to be implemented.
