Archive for the ‘Testing’ category

Deep Dive into TDD Revisited

August 30, 2007

 

Hi, everyone. I haven't posted any serious technical content on this blog for a long time now. The reason for this is that I'm now a pointy-haired boss most of the time. I spend my days teaching, mentoring, coaching, and occasionally pairing with someone on another team. I miss coding… I really do.

However, I’ve been digging into Interaction Based Testing over the past few weeks, and I’ve found it fascinating. The road I took to get here involved trying to learn more about what Behavior Driven Development is, and why so many people I know and respect seem to like it, or at least appreciate it. One of the techniques that BDD uses is something called Interaction Based Testing, or IBT for short.

Interaction Based Testing

IBT differs from traditional TDD in that it defines and verifies the interactions between objects as they take place, rather than verifying that some input state is successfully translated to some output state. This latter kind of testing, called State Based Testing, or SBT for short, is what I had always done when I did TDD (for the most part). IBT involves using a mock object framework that allows you to set expectations on the objects your class under test is going to call, and then helps you verify that each of those calls took place. Here is a short example:

using NUnit.Framework;
using Rhino.Mocks;
using Rhino.Mocks.Constraints;

[TestFixture]
public class IBTExample
{
    [Test]
    public void SampleIBTTest()
    {
        MockRepository mocks = new MockRepository();

        // The framework builds a mock implementation of IListener for me
        IListener listener = mocks.CreateMock<IListener>();
        Repeater repeater = new Repeater(listener);

        // Record mode: this call isn't real -- it records an expectation
        // that Hear() will be called with a non-null string, exactly once
        listener.Hear("");
        LastCall.On(listener).Constraints(Is.NotNull()).Repeat.Once();

        // Stop recording expectations and start playing them back
        mocks.ReplayAll();

        // Exercise the real code under test
        repeater.Repeat("");

        // Fail the test if any recorded expectation was not met
        mocks.VerifyAll();
    }
}

The basic problem I'm trying to solve here is to write a method, Repeat(), on a class called Repeater, such that when I call Repeat(), it repeats whatever it was passed to its IListener. The way I set this up is more complicated than what I would write in a state-based test, but it keeps irrelevant implementation details (like explicit data) from cluttering the test.

What this test is doing is creating the system and setting expectations on the IListener that define how the Repeater class is going to use it. The MockRepository class represents the mock object framework I'm using, which in this case is Rhino Mocks. I new one of these up, and it handles all the mocking and verification activities that this test requires. On the next line, you see me creating a mock object to represent an IListener. In the past, I would have created a state-based stub for this listener that simply remembered what it was told, for my test to interrogate later. In this case, the framework creates a testing version of the interface for me, so I don't have to build my own stub. Next, I create the class under test and wire it together with the listener. Nothing fancy there.
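For comparison, here is roughly what that hand-rolled, state-based stub would have looked like (a sketch; the StubListener name and the test are mine):

public class StubListener : IListener
{
    // Simply remembers what it was told, for the test to interrogate later
    public string WhatWasHeard;

    public void Hear(string whatToHear)
    {
        WhatWasHeard = whatToHear;
    }
}

[Test]
public void SampleSBTTest()
{
    StubListener listener = new StubListener();
    Repeater repeater = new Repeater(listener);

    repeater.Repeat("hello");

    Assert.AreEqual("hello", listener.WhatWasHeard);
}

Notice that the state-based version has to commit to a concrete value ("hello") just to have something to assert against.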

The next line looks a little strange, and it is. It's a consequence of how this particular mocking framework works, but it is easily understood. While it may look like I'm calling my listener's Hear method, I'm actually not. When you create a MockRepository, it starts out in record mode. This means that every time you invoke a method on a mocked-out object while recording, you are actually calling a proxy for that object and defining expectations for how that object will be called by your real code later. In this case (admittedly, not the simplest case), listener.Hear() is a void method, so I have to split the setting of expectations into two lines. On the first line, I call the proxy, and the framework makes a mental note that I called it. On the next line, I say to the framework, "Hey, remember that method I just called? Well, in my real code, when I call it, I expect that I am going to pass it some kind of string that will never be null, and I'll call that method exactly once. If I do these things, please allow my test to pass. If I don't do them, then fail it miserably."
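As an aside, the two-line dance is only needed for void methods. Had Hear() returned a value, Rhino Mocks would let me record the expectation and the canned return value in a single line with Expect.Call(). A sketch, assuming a hypothetical CountHeard() method on the interface that returns an int:

// Hypothetical non-void method: expectation and canned return value in one line
Expect.Call(listener.CountHeard()).Return(42);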

After I set up the single expectation I have on the code I'm going to be calling, I exit record mode and enter replay mode. In this mode, the framework allows me to run my real code and plays back my expectations while that code executes. The framework keeps track of whatever is going on, and when I finally call my application method, Repeater.Repeat() in this case, followed by mocks.VerifyAll(), it checks that all expectations were met. If they were, I'm cool; otherwise, my test fails.

I hope that was at least a little clear. It was very confusing to me, but I sat down with a few folks at the agile conference two weeks ago, and they showed me how this worked. I’m still very new at it, so I’m likely to do things that programmers experienced with this kind of testing would find silly. If any of you see something I’m doing that doesn’t make sense, please tell me!

Here is the code this test is forcing me to write:

public class Repeater
{
    private readonly IListener listener;

    public Repeater(IListener listener)
    {
        this.listener = listener;
    }

    public void Repeat(string whatToRepeat)
    {
        listener.Hear(whatToRepeat);
    }
}

public interface IListener
{
    void Hear(string whatToHear);
}
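To see the verification earn its keep, imagine I break Repeat() so it never talks to its listener. The expectation recorded in the test goes unmet, and mocks.VerifyAll() fails the test (Rhino Mocks reports the unmet expectation by throwing, as I understand it):

public void Repeat(string whatToRepeat)
{
    // Broken on purpose: listener.Hear() is never called, so the
    // expectation recorded in the test is never satisfied and
    // mocks.VerifyAll() fails the test.
}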

Advantages of IBT-style TDD

There are several things about this that I really like:

  • It allows me to write tests that completely and totally ignore the data being passed around. In most state-based tests, the actual data is irrelevant; you are forced to provide some values just so you can see whether your code worked, and those values obfuscate what is happening. IBT lets me keep any data that isn't completely relevant out of a test, which lets me focus on what the test is saying.
  • It allows me to defer making decisions until much later. You can't see it in this example, but I'm finding that I'm much better able to defer making choices about things until I truly need to make them. You'll see examples of this in the blog entries that are to follow (more about this below).
  • It leads me to much simpler code than state-based testing would.
  • My workflow changes. I used to:
    1. Write a test
    2. Implement it in simple, procedural terms
    3. Refactor the hell out of it

With ITB, I’m finding that it is really hard to write expectations on procedural code, so my code much more naturally tends to lots of really small, simple objects that collaborate together nicely. I am finding that I do refactoring less frequently, and it is usually when I’ve changed my mind about something rather than as part of my normal workflow. This is new and interesting to me.

There are some warts that I’m seeing with it, and I’ll get to those as well, as I write further in this series. I’m also very certain that this technique has its time and place. One of the things I want to learn is where that time and place is. Anyhow, here are my plans for this:

Revisiting my Deep Dive

I want to redo the example I did a couple of years ago when I solved the Payroll problem in a 6-part blog series. I want to solve the same problem in an IBT style, and let you see where it leads me. I've done this once already, part of the way, just to learn how this worked, and the solution I came up with was very different from the one I did the first time. I'm going to do this new series the exact same way as the old series, talking through what I'm doing and what I'm thinking the whole time. I'm personally very curious to see where it goes.

Once we’re finished, I want to explore some other stories that are going to force me to refactor some of my basic design assumptions, because one of the knocks against ITB is that it makes refactoring harder by defining the interactions inside your tests and your code. We’ll find out.

Please ask questions

I’m learning this stuff as I go, so I’m very eager to hear criticisms of what I’ve done and answer questions about why I’ve done things. Please feel free to post comments on the blog about this and the following entries. I’m really looking forward to this, and I hope you readers are, too.

— bab


Unit Testing with Visual Studio 2008

July 23, 2007

 

Unit testing features will now be available in the Professional version of Visual Studio 2008.

Unit testing has been to Visual Studio what Barry Bonds has been to baseball – a center of controversy. First there was the Peter Provost petition to include unit testing features in all versions of VS. Then there was the highly criticized TDD guidance accompanying the feature. Next came some performance issues and pain while using the shipping version, and most recently, the TestDriven.NET hullabaloo added an emotional charge to the air.

Putting all this behind us – what’s new in 2008? I’ve been working with the latest bits, and I can say:

  1. Performance has improved dramatically.
  2. The context-menu command “Run Tests” is new (and context sensitive).
  3. Keyboard shortcuts take away the pain of the VS2005 test runner (Ctrl+R, A to run all tests in a solution, Ctrl+R, T to run tests in the current context).

Moving the unit-testing features into the Pro edition is a great move by Microsoft. I hope the feature gains traction and brings awareness of unit testing into the mainstream (although I think we are already close, aren’t we?).
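For anyone who hasn't tried the built-in framework yet, a minimal MSTest fixture looks something like this (a sketch; the class name and trivial assertion are mine, but the attributes and Assert come from Microsoft.VisualStudio.TestTools.UnitTesting):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_ReturnsSumOfOperands()
    {
        // Run with Ctrl+R, A (all tests) or Ctrl+R, T (current context)
        Assert.AreEqual(4, 2 + 2);
    }
}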

Related Links

Guidelines for Test-Driven Development by Jeff Palermo
Rules to Better Unit Tests by Adam Cogan


Designing for Testability

July 4, 2007

 

EDIT: What I should really say is that it isn't just Designing for Testability, it's Designing with Testability.

From a question on my Passive View blog post:

 “should we design for testability, or should we try and test what’s designed (perhaps designed badly, so we refactor later)?”

Here’s my take:

"Done, done, done" isn't just writing code. It's writing code and verifying that the code works correctly. You don't ship until the code is proven to work (hopefully). Designing for testability might cost you extra time in coding (which I would actually dispute somewhat), but it can easily save time on the whole by cutting down the time spent debugging and testing. I see testability design as a way to optimize the time to deliver, even if it ups the time spent on design or coding.

One of the very painful truths that TDD newbies learn the hard way (myself included) is that retrofitting automated tests to existing code can be very difficult.  Just trying to test what’s designed may not work out very well, and frankly, I have yet to see a codebase that wasn’t built with TDD that was easy to test.

What is testability design, anyway? Granted, there are some things I do, like opening up more public accessors or pulling out more interfaces strictly for testing, that could arguably be described as "bad." However, most of what constitutes designing for testability, or using testability as a design heuristic, is a matter of how best to assign responsibilities by following older design principles that predate TDD by many years. Achieving testability is mostly a matter of separation of concerns, low coupling between classes and subsystems, and cohesion. Exactly the design qualities that we've always strived for to make our code maintainable. It's what we've been trying to do anyway. If you practice traditionally good design, you may already be most of the way to testability.
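To make that concrete, here is a small sketch of the kind of seam I mean (all the names here are mine, invented for illustration). A version that newed up a concrete mailer inside SendReminder() would give a test no way to intercept the call; handing the collaborator in through an interface is just plain old low coupling, and it happens to make the class testable:

public interface IMailer
{
    void Send(string message);
}

public class InvoiceService
{
    private readonly IMailer mailer;

    // The collaborator arrives from outside, so a test can hand in
    // a stub or mock instead of a real mail gateway
    public InvoiceService(IMailer mailer)
    {
        this.mailer = mailer;
    }

    public void SendReminder(string message)
    {
        mailer.Send(message);
    }
}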

Testability is, in my opinion, the ultimate design smell detector at the granular level. If you're finding it hard to unit test your code, you've likely got a coupling or cohesion problem. Testability is yet one more design tool to stick in your design toolbox, right between UML/CRC modeling and code smells. At the very least, think of it this way: driving an application design at least partially through tests is yet another example of starting with the end in mind. How will I know that this code works correctly? How will I know that I'm done?

Yes, using TypeMock or switching to a dynamically typed language will let you more readily create testing seams with less conscious effort, but that's not the entire ballgame. Throw testability and orthogonality out the window and write a ball of mud, and no amount of TypeMock magic is going to help you out.

Anyway, go ahead and start arguing with me.  As always, comments are open.


HTTPSimulator – Simulating HTTP Requests for unit testing made easier

June 29, 2007

 

Phil Haack just released HTTPSimulator – a class that helps you run tests against a simulated HttpContext, letting you simulate requests and more without needing a web server running for the tests. [via DNK]

I like this approach but I also like Phil’s comments at the end. Read them. He’s right.

Also noted:

“[I] originally tried to do all this by using the public APIs. Unfortunately, so many classes are internal or sealed that I had to get my hands dirty and resort to using reflection. Doing so freed me up to finally get certain features working that I could not before.”

This again ties back to the Testable Object Oriented Design (TOOD) idea that I wrote about. Testable design might break down into two categories:

  • A test-enabling/disabling design (sealed classes, high coupling, etc. prevent us from writing tests that simulate or stub out parts of the design)
  • A testable/non-testable design – meaning how easily you can test the parts of the code you'd like to test.

In this case, the HTTP-related classes are test-disablers, in that we cannot easily stub out their features to test something that relies on them.
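To illustrate the disabler category with a sketch (the class names here are mine): code that reaches straight into HttpContext.Current cannot run outside a real (or simulated) web request, while the same logic written against a narrow abstraction tests trivially:

using System.Web;

// Test-disabled: depends directly on ASP.NET's sealed/static types,
// so nothing short of a real or simulated HTTP request can exercise it
public class RequestEcho
{
    public string CurrentUrl()
    {
        return HttpContext.Current.Request.Url.ToString();
    }
}

// Testable: the same information arrives through an interface
// that a test can implement in a couple of lines
public interface IRequestInfo
{
    string Url { get; }
}

public class TestableRequestEcho
{
    private readonly IRequestInfo request;

    public TestableRequestEcho(IRequestInfo request)
    {
        this.request = request;
    }

    public string CurrentUrl()
    {
        return request.Url;
    }
}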
