Archive for September 2007

Outlook using Word 2007 as HTML Rendering Engine

September 7, 2007


Interesting postings here, here and here.

August 30th Links: ASP.NET, ASP.NET AJAX, IIS7, Visual Studio, Silverlight, .NET

September 7, 2007


Here is the latest in my link-listing series.  Also check out my ASP.NET Tips, Tricks and Tutorials page for links to popular articles I’ve done myself in the past.

  • ASP.NET Charting with NPlot: Olav Lerflaten has a great article that describes how to use the free NPlot charting engine for .NET to create professional scientific charts of data using ASP.NET.

  • Export GridView to Excel: Matt Berseth has another excellent post on how you can export data to Excel from within your ASP.NET application.

  • Using Coordinated Universal Time (UTC) to Store Date/Time Values: Scott Mitchell has a useful article that describes how to use the UTC format to store date/time values within a SQL database so that it is transportable across timezones.  This is important to think about if your business operates in multiple geographic locations (or if your hosted web-server is located in a different time zone).
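The core of the approach Scott describes can be sketched in a few lines (the variable names below are my own illustration, not from his article): capture UTC on the way into the database, and convert to the viewer's local time zone only at display time.

```csharp
using System;

class UtcStorageSketch
{
    static void Main()
    {
        // On the way into the database: store UTC, not server-local time.
        DateTime createdOnUtc = DateTime.UtcNow;

        // On the way out: convert to local time only when displaying it.
        DateTime display = createdOnUtc.ToLocalTime();

        Console.WriteLine("Stored (UTC):  {0:u}", createdOnUtc);
        Console.WriteLine("Shown (local): {0}", display);
    }
}
```

Because the stored value is always UTC, two servers in different time zones (or a relocated hosted server) will agree on the instant the record was created.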

  • Fixing Firefox Slowness with localhost on Vista (or XP with IPv6): One annoying issue I’ve run into with Firefox is that sometimes – when doing localhost development – it can take several seconds to connect back to a local page.  It turns out this slowness is caused by a Firefox IPv6 issue with DNS resolution.  Dan Wahlin has a good pointer on how to fix this.
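For reference, the workaround Dan points to boils down to a single about:config change in Firefox (preference name from the Firefox 2-era fix; verify against his post):

```
network.dns.disableIPv6 = true
```

Type about:config in the address bar, filter on "ipv6", and double-click the preference to set it to true; localhost pages should then resolve immediately.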

  • ASP.NET AJAX Exception Logging: Kazi Manzur Rashid has a nice article that shows how to create an effective error logging system with ASP.NET AJAX to catch and record client JavaScript errors.

IIS 7.0
  • Developing IIS7 Modules and Handlers with the .NET Framework: Mike Volodarsky from the IIS7 team has an excellent step-by-step blog post that describes how you can now write HttpModules and HttpHandlers using managed code that participate in all requests to a web-server.  This enables you to easily handle scenarios that previously required custom C++ ISAPIs.
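To give a flavor of what Mike's post covers, a managed module boils down to implementing IHttpModule (the module name and the custom header below are my own illustration, not from his post):

```csharp
using System;
using System.Web;

// A minimal managed module that times every request and reports the
// elapsed milliseconds in a custom response header.
public class RequestTimerModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += delegate(object sender, EventArgs e)
        {
            ((HttpApplication)sender).Context.Items["RequestStart"] = DateTime.UtcNow;
        };

        app.EndRequest += delegate(object sender, EventArgs e)
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            DateTime start = (DateTime)ctx.Items["RequestStart"];
            double ms = (DateTime.UtcNow - start).TotalMilliseconds;
            ctx.Response.AppendHeader("X-Elapsed-Ms", ms.ToString("F0"));
        };
    }

    public void Dispose() { }
}
```

On IIS7 in integrated mode you register the module under &lt;system.webServer&gt;/&lt;modules&gt; in web.config, and it then runs for every request to the server – static files included.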

  • LINQPad: Joseph Albahari has an incredibly awesome LINQ query expression tool that you can use to quickly try out LINQ expressions.  Think of it as SQL Query Analyzer – but with LINQ expressions as the queries.  Definitely a useful free tool to add to your toolbox.

Visual Studio
  • The SQL Data Tools in VS 2008: Rick Strahl talks about some of the new database schema comparison, data comparison and SQL Refactoring features within Visual Studio 2008.

Silverlight
  • Recreating iTunes in Silverlight: Jose Fajardo has an absolutely fantastic blog with a ton of Silverlight content on it.  One of the projects he has been working on is recreating Apple’s iTunes media player using Silverlight.  Check out his multi-part blog series that discusses step-by-step how he built it.  Absolutely brilliant.

  • New Halo3 Video using Silverlight: The new Halo3 preview video was recently posted to the web – using Silverlight 1.0 to build a custom viewer and stream an HD version of it.  Click here for a lower resolution version if you are on a slow network.

  • Sudoku for Silverlight: David Anson has built a cool online sample using Silverlight that helps you play the popular Sudoku game.  Useful for both Sudoku addicts and developers wanting to learn Silverlight.  

  • Font Embedding and RSS data in Silverlight: Tim Heuer has a cool blog post that shows how you can create your own font-type and embed it within your Silverlight 1.0 application.  He then has his application retrieve dynamic content from an RSS feed and use the custom font to display it.  You can run the finished application online here (all of the text here is dynamic – not a screen-shot).

  • Silverlight Drag and Drop JavaScript Framework: Roberto Hernandez-Pou has a nice article and sample that describes how to implement a drag/drop mechanism for Silverlight 1.0 (using JavaScript).  This article is in both Spanish and English – scroll down if you are looking for the English version.

  • Pascal Support for Silverlight: RemObjects Software now has a project template for VS 2008 that enables you to write Silverlight 1.1 .NET applications using Pascal.  It is kinda wild to see a screenshot of Firefox on the Mac, running a Silverlight application, written with a Pascal code-behind file built with VS 2008.

.NET
  • LLBLGen Pro V2.5 released: Last week Frans Bouma released the latest version of LLBLGen Pro, which is an excellent ORM implementation for .NET.  New features include richer auditing, authorization, and dependency injection support.

  • IronLisp: A new CodePlex project has recently started that provides the beginnings of a LISP implementation for .NET, built using the new DLR (Dynamic Language Runtime) framework.  Earlier this month I was thinking to myself “I really need to spend some time using LISP or Scheme”.  Now I can do it on .NET.  Sweet.

Hope this helps,



How Long Before Microsoft Releases a Mock Object Framework

September 7, 2007

Interesting and strong opinions. I’d like to find out the reasons behind each so that I can make an informed judgement when push comes to shove and everyone in our organisation is moved to TFS-based technologies exclusively.

It’s a bit early for predictions for 2008… but here’s mine, anyway…

NUnit wasn’t invented by the borg, so they made their own (crappy) developer testing framework…
NAnt wasn’t invented by the borg, so they made their own build script and runner…
Cruise Control wasn’t invented by the borg, so they made their own (crippled) build server…
Windsor wasn’t invented by the borg, so they made their own (lame) IoC container…
Aspect# wasn’t invented by the borg, so they made their own aspect framework…
NHibernate wasn’t invented by the borg, so they made their own (mostly crappy) ORM framework…

I guess RhinoMocks is next.


Trying to answer hard questions about Agile development

September 7, 2007


My immediate organization (formerly Finetix) largely uses XP/Scrum practices, but much of our larger parent organization is still new to Agile development.  Since more and more clients are asking for Agile project delivery, several of my coworkers and I were asked to participate in an Agile roundtable event.  The roundtable was asked a series of five questions one at a time and then went around the table.  To the best of my recollection, here is a distillation of my responses.

Yes, my answers are very biased, but I’ve come by these biases honestly.  I’ve had mostly positive experiences with XP/Scrum, and nothing but irritation from working with waterfall mentalities.  It’s probably true that many shops don’t really do a strict waterfall in practice, but in a way that’s even worse: giving lip service to the waterfall while working adaptively on the side.  Those shops are living a lie.  They’re trying to work adaptively, since that’s the only real way to succeed at complex projects, but their project lifecycle simply doesn’t support that adaptation.  Any decent usage of an Agile process is built around gathering feedback and making adaptations.  We ought to just start telling ourselves and management the truth and bring this adaptation out into the open sunshine where it’s easier to control.

What about Fixed Bids?

I am largely passing on a question about doing Agile on a fixed bid, fixed deliverable project.  I don’t have any first-hand experience, and my cheeky response is that it works the exact same way as with any other process.  You do your damndest to estimate the work by breaking it down into tasks as fine-grained as possible, then work stupidly long hours when the estimate turns out to be wrong.  Yeah, the extra upfront work to do the estimate isn’t necessarily Agile, and probably makes the whole of the work more expensive than it would be otherwise, but the situation is what it is.

I don’t really think the Agile answer to a fixed bid is any different from any other process.  I do think that Agile practices and project management can give you far more control and feedback on the “Iron Triangle” of resources, time, and features.  Agile/RUP/CMMI/waterfall whatever, the iron triangle constraints still apply.  If you try to lock all three constraints you’re in for either pain, unhappiness, or protective sandbagging in your estimates.  I would still choose to use Agile delivery for fixed bid projects because I think that is the most efficient way to execute and allows for the ability to fail “softly” with some fraction of the features instead of total abject failure to deliver any features on time like a waterfall project.

How do you create agile requirements? Some teams define stories poorly and then discover later that implementing these stories takes much longer than estimated. Does a note card’s worth of information provide adequate detail for a good estimate? Is it a problem if estimates are highly inaccurate?

I’m going to be brief here because I have a “Jeremy length” essay on Agile requirements coming soon that hits this topic in detail.  The short answer is that user stories on note cards are definitely not enough information for good estimates.  That’s okay though, because that isn’t all the information.  Story cards are primarily a project management tracking device, plus a way of creating a common language for the team.  The note cards are simply an icon representing some finite amount of work.  The actual estimates are produced by the team, with the developers tasking out the larger feature with plenty of help from the analysts, customers, and project manager – but that conversation absolutely has to happen.  In an Agile project you are very consciously trading intermediate deliverables for far more face-to-face communication.  If I’m allowed to have my way, the detailed requirements will be captured in the form of acceptance tests and largely automated – but only close to the time that a given feature is actually put into development.  There’s no use in doing detailed analysis for some feature that ends up getting scrapped.  That’s just waste.

Check out this link from Jim Shore:  Beyond Story Cards: Agile Requirements Collaboration

How is Agile different from RUP? Each has development and release in an iterative process. Unlike waterfall, both can have late inclusion of requirements.

I had fun with this one.  I should probably say that while I have plenty of “book” knowledge about RUP, I’ve never used it in anger.  I’ve spoken to many people that very happily transitioned from RUP to XP or Scrum and refused to ever go back.  I did try to champion a conversion to RUP at a former employer before running away to join the XP circus.  This is a partial repeat of an earlier post, but who cares?


Are they Iterative?
RUP is supposed to be iterative, and the founders of RUP will turn their faces blue saying so.  The problem for me is that RUP still includes a lot of waterfall-mentality baggage in the form of project phases.  The iterations are much longer and seem to amount to mini-waterfall projects in their own right.  The typical RUP iteration lengths I’ve heard ranged from 6 weeks to a couple of months.
Extreme Programming uses 1 to 2 week iterations; Scrum teams originally worked in 30 day sprints, but the cross-pollination with XP has led to shorter iterations.  Testing is engaged much earlier in an Agile team.

RUP is commonly disparaged for its dizzying array of intermediate deliverables.  Most are optional and teams are meant to pick and choose which deliverables are appropriate for their project.  Some of the RUP deliverables may simply be an excuse to justify the purchase of the Rational lifecycle products.
XP and Scrum used to be described as low ceremony, but that might be a bald-faced lie.  The project management “ceremonies” are simpler, but have to be followed closely.  XP in particular will have more impact on the minute-by-minute activity and behavior of the team than any other process.

Rational is a software company that makes their living from selling their lifecycle tools.
There are tools from vendors that support or aid with Agile processes and practices, but by and large, Agile teams use far more open source tooling than non-Agile teams. 

I think that RUP was largely created by people with C++ experience.  Coding is nasty, brutish, and hard.  The only way to succeed is regimented discipline: quality gates and the creation of documents like the Software Architecture Document.
Most of the early Agile leadership had a background in Smalltalk.  Coding can be productive and pleasant, but the extreme flexibility of the language requires an internalized discipline.  There might not be many intermediate deliverables in XP/Scrum, but XP/Scrum requires a very disciplined approach from the developers in the form of Test or Behavior Driven Development, Continuous Integration, and Simple Design (much harder than it sounds).


All UML all the time.  The Three Amigos were all architects, and RUP has a strong emphasis on architecture.
The Simplest Thing That Could Possibly Work, the Last Responsible Moment, You Aren’t Gonna Need It (basically, don’t ever do anything more complicated than what you need *right now*).  Most Agile proponents eschew UML, but I’d write this off more to personal preference than a real prohibition.  I do think that CRC cards and Responsibility Driven Design are more effective inside rapid iterations than even informal UML.

You’re an Agile team, but inside of a Waterfall environment.  Is it possible to remain Agile?  What compromises must you make?

You will absolutely have to compromise.  Your team may be flexible in regards to feature ordering and iteration planning, but the waterfall team isn’t.  If a waterfall team needs something from the Agile team, that feature simply needs to be made a priority and played in the next iteration.  The other way around is trickier.  Waterfall teams generally work off of longer plans and don’t have that type of flexibility.  If you’re going to be dependent upon work from a waterfall team, you have to treat that dependency as a constraint.

Here’s where I think the bitter irony is: the waterfall teams purport to be more predictable because they have a linear project plan, but those plans are rarely accurate unless they are constantly adjusted in the face of feedback.  Because those plans are never truly accurate, we need to be able to adapt if it turns out the waterfall team is late with their work.  I think that the flexible delivery schedule of rapid iterations should give us more ability to simply switch to working on other features.

As an Agilist I try to make all design decisions at the Last Responsible Moment.  In the case of a dependency on a waterfall team, my Last Responsible Moment comes much, much earlier than it would if that same feature were completely controlled by the Agile team.  Much of software design is being cognizant of what design decisions have to be made and the appropriate time to make each decision.  In other words, you need to decide when to decide.  In this particular case, I have to make decisions earlier than normal just to determine the concrete needs from the waterfall team early enough to get those needs into the waterfall team’s project plan.

There is a very serious mismatch in terminology and vocabulary between teams using XP or Scrum and other teams doing more traditional waterfall work.  They think we’re out of control and we think they’re largely nuts.  You will have to invest some time with the other management to make them understand, or agree on a compromise for, how the Agile team is going to communicate progress.  There are no real intermediate deliverables on a typical Agile project.  I take the fairly common viewpoint that the only real measure of progress is features completed that are potentially able to be shipped.

Project staffing can be the killer in my experience.  To really do an iterative lifecycle we need to include the analysts and testers as part of the holistic team — and leave them there!  Part of the enduring allure of nice, linear waterfall lifecycles is the belief that analysts can be rolled off of a project early and testers only need to be on the project at the end.  It’s a nice little myth, but I’ve seen nothing but severe pain from that type of project staffing in practice.


Extend Model View Presenter to ASP.Net 2.0

September 7, 2007


using ASP.Net 2.0 advanced features


BDD and "How Are You Going to Use That Information"

September 7, 2007


Fred George has a post called “And How Are You Going to Use That Information?” that strikes at the heart of the analysis practices in Behavior Driven Development.

I’ve been looking for a question like “And how are you going to use that information?”  My team is probably going to get really fed up with hearing this particular question, as I often use variants of it when trying to drive home the importance of BDD to YAGNI-avoidance.  Fred’s phrasing makes it much easier for me.

Here’s an example from a recent release planning meeting:

Other: I’d like to see the current SVN build stamp in the footer of the app’s web pages.
Me: Why?
Other: So I can include it in feedback on the pre-alpha previews.
Me: Can you give me a story for that?

My assumption here is that once the story is surfaced, the person I’m speaking with will see that the feature won’t be valuable.

Here’s the same conversation (theoretically) from Fred’s universe:

Other: I’d like to see the current SVN build stamp in the footer of the app’s web pages.
Me: Why?
Other: So I can include it in feedback on the pre-alpha previews.
Me: And how are we going to use that information?
Other: To track which release the feedback applies to (presumably).

Since we’ve got an extremely small number of people in our preview pool (fewer than 10 at present), and since we’ve got very little functionality released for preview, the build number won’t really help with the actions that we take based on feedback.

With such a small dev team, and so few previewers, and so little functionality, it’s near impossible for anyone who is invested in the project to not know what feedback pertains to.

The build number stamp is superfluous to our ability to respond to feedback and take action based on the feedback.  Our review of the inbound feedback, and the prioritization of tasks happens as a matter of agile planning, which is an information-immersive negotiation for stakeholders and designers.

No one could possibly not know what the feedback pertains to, and therefore further qualification by build number doesn’t enable us to fulfill our responsibilities any better.  It does, however, arbitrarily cost us the time of implementing the build number stamp.

This information might be useful to us later – when we have more releases, more features, more feedback, etc.  If we were to put in the effort to build it now, we would – in Lean terms – be incurring inventory cost.

Asking “How are you going to use that information” forces us to look at the behaviors that capitalize on that information, or that are enabled by that information.  When we just declare that we need some piece of information or other without justifying it with a concrete user story, we’re just doing model-driven or data-driven design.

When I focus first on behaviors – especially user goals as expressed based on scenarios with realistic context – the understanding of the necessary data will simply become clear and evident.  If I get distracted in the details of data-driven and model-driven analysis, I run a really high risk of accumulating unneeded inventory and its associated costs.

Drill down on behaviors.  Find out if they are real and substantial, and then figure out the data needs.  Starting at data is often a crap shoot at best.  Behaviors will surface data needs, but the opposite is almost never true – not in any substantial and meaningful way that works against incurring inventory in any case.


Look here for the hard answers

September 7, 2007


Last week I made a post called Trying to answer hard questions about Agile development.  The one question I had to largely duck because of a lack of experience was the dreaded “Fixed bid, fixed scope, whither Agile?” question.  It sparked a bit of conversation and some other posts that actually take on the question of Agile on fixed bid projects:

On a related note, check out Agile development in a FDA regulated setting.


BDD, TDD, and the other Double D’s

September 7, 2007


Behavior Driven Development (BDD) has been a pretty big topic in some of the email groups I lurk in.  I’m seeing BDD cast as a whole new paradigm of development, whereas I see BDD as an evolution of TDD with a better syntax and mechanics for expressing the desired functionality with tests/specifications.  That’s more than enough of an advantage to jump into BDD and plenty to be excited about, but not enough to designate BDD as a whole new paradigm.  Andy Glover described BDD as TDD Done Right, and I concur.

Part of the impetus behind the move to BDD is appealing to people who were always turned off by TDD.  That’s great, but I’m seeing people try to skip the “driving a design through unit tests” discipline of TDD and jump right to executable requirements at the acceptance level.  Jeremy, you might say, that’s just some useless terminology from Agile yesterday that we need to let go of!  That’s not baggage so much as a set of hard lessons learned.

Any testing/specification is better than no testing/specification, but in 4+ years of practicing TDD I’ve learned over and over again that it’s more efficient to write fine-grained unit tests first before proceeding on to making the acceptance and integration tests pass.  It’s great to have the business facing tests/specifications written before coding, but you don’t work with them until you’re reasonably confident from unit tests.  It’s all about the feedback cycle and reducing a big problem into small, manageable parts.  It’s easier to shake out most of the problems with the code at the unit level first than it is to try to code up to a big coarse grained test first.  If you try to skip right to coarse grained tests you’re likely to spend some time in debugger jail.  I’ll scream until I’m hoarse that it’s best, and even more efficient in developer time, to do multiple levels of testing instead of only coarse grained tests.

Besides, if you want to know how to code in a way to make BDD succeed at both the unit and acceptance levels, you’re going to end up learning almost all of the lessons we TDD practitioners have already faced.  Don’t even think about ditching all of the TDD design lore when you move to BDD.  Besides, those cool executable requirements specifications in xBehave?  They’ll work a lot smoother if your code is testable (loosely coupled, highly cohesive) and there’s already quite a bit of lore on how to do that in the TDD canon.

How many of these Double D’s do I need?  How “Driven” must I be?

While technically you need zero of the Double D’s to deliver working code, many of the xDD practices and methods are quite complementary.  Rebecca Wirfs-Brock has a nice summary of many of the Double D’s.  I’ll take my own crack at how the Double D’s differ and where they complement each other:

  • Test Driven Development / Behavior Driven Development – Low level design, unit testing as well.  Should lead to low level specifications for the code, especially with BDD.
  • Acceptance Test Driven Development – Capturing business requirements as executable tests.  Do NOT confuse this with TDD/BDD.  Not all that common in practice, but highly advantageous.
  • Responsibility Driven Development – A design technique that puts an emphasis on assigning responsibilities to the constituent classes in an OOP system, with plenty of terminology for identifying and classifying responsibilities.  I think RDD is a perfect complement to TDD/BDD because of its lightweight nature.
  • Domain Driven Design – The Domain Model pattern done right for logic intensive systems.  DDD is also a natural ally of TDD because it leads to a system that’s far easier to drive through unit tests than a traditional n-Tier system written in a procedural Transaction Script style.  RDD is also a natural complement to DDD to help guide the assignment of responsibilities between entities and services.
  • Model Driven Development – I threw this one in just to be complete, but it’s coming from a completely different world view than the other Double D’s, and I’ve always been dubious on this one.  I suppose DDD could easily be combined with MDD.  I used to worry that I was in danger of missing the boat if it really took off, but that thought hasn’t crossed my mind for years.  Go make up your own mind.


LINQ Makes Life Easier

September 7, 2007


LINQ is really cool.  There are tons of examples on the web that show you how to use LINQ to query a database, XML file, text file or collection.  This is what LINQ will primarily be used for: simplifying the mapping between external relational data and object-oriented programming.  With LINQ, records can be extracted from a database and encapsulated into type-safe objects that you can use in .NET just like any other object.  In my opinion, this is really going to change the way that we develop data-driven applications.

The change is likely going to take some getting used to.  The first time you look at a LINQ expression, it just looks weird.  But after you start to understand it, you might realize that it is very similar to a SQL statement (select..from..where..orderby).  Then it just clicks and you are off writing all kinds of crazy queries.

Why are developers going to take the time to learn LINQ?  Personally, I think it makes life a lot easier.  Compare any LINQ example on the web to the .NET 2.0 code you would have to write to accomplish the same thing.  LINQ is more concise and easier to implement.

For example, let’s say that I want to create a collection that contains a reverse-sorted list of filenames from a certain directory.  I also want the associated creation time, last access time, and last write time for each file.  Well, I could create a class that implements these properties and then populate instances of the class in a collection by doing something like “foreach (string f in Directory.GetFiles(path))” and then calling the appropriate File methods to fill in the data that I want.  Not too hard – just a simple class definition, a loop and a few method calls.
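For comparison, the .NET 2.0 “long way” described above might look roughly like this (the FileData class and variable names are my own sketch):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// A hand-rolled class holding the file data we care about.
class FileData
{
    public string FileName;
    public DateTime CreationTime;
    public DateTime LastAccessTime;
    public DateTime LastWriteTime;
}

class Program
{
    static void Main()
    {
        List<FileData> files = new List<FileData>();
        foreach (string f in Directory.GetFiles(@"c:\windows"))
        {
            FileData d = new FileData();
            d.FileName = f;
            d.CreationTime = File.GetCreationTime(f);
            d.LastAccessTime = File.GetLastAccessTime(f);
            d.LastWriteTime = File.GetLastWriteTime(f);
            files.Add(d);
        }

        // Reverse sort by filename (2.0-style anonymous method).
        files.Sort(delegate(FileData a, FileData b)
        {
            return string.Compare(b.FileName, a.FileName);
        });
    }
}
```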

What if I could do all of that in less than 10 lines of code?  

// Requires using directives for System, System.IO and System.Linq.
var files =
    from file in Directory.GetFiles(@"c:\windows")
    orderby file descending
    select new
    {
        FileName = file,
        CreationTime = File.GetCreationTime(file),
        LastAccessTime = File.GetLastAccessTime(file),
        LastWriteTime = File.GetLastWriteTime(file)
    };
Is that not a much easier way of doing it?  And then you can use the “files” object just like any other collection to consume the data.

foreach (var file in files)
    Console.WriteLine("{0}\n\tC: {1}\n\tA: {2}\n\tW: {3}\n",
        file.FileName, file.CreationTime, file.LastAccessTime, file.LastWriteTime);

There are so many uses for this beyond just the database and file arenas.


Design vs. Coding – How Much Is Too Much?

September 7, 2007


A while ago I got asked the following question:

I’ve been wondering something about OOD that perhaps many developers think about, and that is the relationship between the amount of time used for design (e.g. drawing UML) and actual coding.  What’s your take on this?  Do you like to design everything from the high level down to the ground level with UML?  What UML diagrams do you find most useful?  Or do you think that using pure TDD makes UML somewhat…
Since I got introduced to agile methods, I was also introduced very early to the concept of “code” as a design tool.  To be more specific, this is the concept that everyone has come to know formally as Test Driven Development.  When practicing TDD/BDD, the goal is to write the test before the code that you want to create actually exists.  Don’t forget what we are doing here: we are writing “code” for “code” that does not yet exist.  This gives us a completely blank slate to work with.  It allows us to code the object with the exact “behaviors” that we would want it to exhibit and the API through which we interact with it.  After you have been doing TDD for a while, the “explicit thought” of “I should write a test” becomes a habit, so you write the test first without even thinking about it.  This is great – when you have been doing TDD for a while.
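A tiny, hypothetical example of writing “code” for “code” that does not yet exist (the ShippingCalculator class and its API are invented here purely for illustration): the NUnit test is written first, against the API we wish we had, and only afterwards is the simplest implementation written to make it pass.

```csharp
using NUnit.Framework;

[TestFixture]
public class ShippingCalculatorSpecs
{
    [Test]
    public void Orders_over_fifty_dollars_ship_free()
    {
        // At the moment this test is written, ShippingCalculator does
        // not exist -- the test defines the API we want it to have.
        ShippingCalculator calculator = new ShippingCalculator();
        Assert.AreEqual(0m, calculator.CostFor(60m));
    }
}

// Written AFTER the test above: the simplest code that makes it pass.
public class ShippingCalculator
{
    public decimal CostFor(decimal orderTotal)
    {
        return orderTotal > 50m ? 0m : 4.99m;
    }
}
```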

The problem lies with people who are fresh to TDD and still in that uncomfortable period where it does not yet feel natural to write the test first.  As much as TDD is about design, it is also about “moving ahead”.  When you find yourself staring at a screen, lost for which direction to drive out a test, the answer is simple: “Get up in front of a whiteboard and draw!!!”  (I am making the assumption here that you have already tried talking with your team and you are all collectively at a loss for where to begin.)  If you are feeling stuck and unable to move ahead, one of the following diagrams:

  • UML Class Diagram
  • UML Sequence Diagram

may help to generate some ideas in your head that can serve as a starting point for helping you to drive out the test. Of all of the UML diagrams, the only ones I find myself using anymore are:

  • UML Class Diagram
  • UML Sequence Diagram
  • UML State Diagram

Even on teams of experienced agile practitioners, it can often be beneficial to whiteboard a high-level idea to quickly share information with the team and solicit feedback.  These should be quick sanity-check sessions where you are just brainstorming – not a 2-hour session where you are drawing out a complicated class/sequence diagram that will almost certainly not correlate to the resulting code that is produced.

One of the most effective tools a developer has in their arsenal to convey design ideas quickly (aside from unit tests) is a whiteboard and marker.  So even though you may currently feel stuck with respect to which direction to take a test, the act of getting up and trying to draw a quick UML sketch can get the creative juices flowing and provide you with the ammo and direction to get back to the computer and start writing the test.  That is ultimately the design exercise you want to move to, as a test is a “specification” that is able to adapt to the code it is targeting (unlike a static UML diagram).

People starting out in TDD get very uncomfortable when they are faced with the realization that the design skills they thought they had are actually not that proficient.  TDD brings this to the forefront because design is something that is being done constantly.  Most developers I know who are familiar with UML are able to get in front of a whiteboard and come up with a quick sketch of a proposed object model they are thinking about.  Put the majority of those developers in front of Visual Studio with a test fixture, and they often flounder.

It is time that people started developing a different set of design skills: the skill of designing an application by “coding in reverse”.  One of the best ways I try to describe TDD when I am pairing with someone is to “code it like it’s there and it has the exact API that you want to use”.  Once people can take that phrase and literally apply it to a test they are writing for an object, they have crossed, IMHO, the largest learning gap on the road to effective TDD.

Do I think that design is dead? Absolutely not. TDD is first and foremost about design, not testing. It is a practice that requires discipline on the part of all the members of a development team. One of the startling revelations that people who are skeptical about TDD encounter when introduced into a team practicing it, is that the emphasis on solid design is actually much higher than other teams they have been a part of. Why? Because design is something that is happening almost every minute of every iteration. Developers working on stories are driving out the design of components one test at a time. They are driving out the communication between disparate layers in the system one test at a time. All of these “design” artifacts brought together with the accompanying implementation code, over the course of (x) iterations result in a piece of software that has been actively designed and developed over the entire lifecycle.

At the end of the day, I no longer see any value in BDUF.  Let me stress this point: “this does not mean you turn a blind eye to changes that are coming in future iterations that may (and most likely will) require change”.  More importantly, you build the system with tests as a design tool that enables you to keep your objects following:

  • Single Responsibility Principle
  • Hollywood Principle
  • Dependency Inversion Principle
  • …

This will ensure that when change comes (and it will) the effort required to accommodate it will not be a negative one.  And you will be able to drive out the “design” and implementation of the change by following a set of repeatable practices that will eventually become second nature, if you stick with it.
