Challenges That Led Me Here
- Heated arguments at work regarding how much TDD is enough and how little is too little. How do we find common ground?
- An acknowledgement of technical debt and confusion about how to leverage it. How much debt is too much?
- Being labeled as pedantic and a zealot. Is a Zero-Defect Mindset ever worthwhile? When?
- A learning exercise in how we can gain concrete insights from our intuition in a methodical fashion. How can I communicate abstract ideas, without concrete evidence, in a rigorous manner?
This article represents my lessons learned from this exploration.
Making the Abstract Concrete
It was a normal day at work: another co-worker and I were strongly and passionately arguing for the benefits of strict, pure, clean TDD against a couple of other equally passionate co-workers who were sold on the idea of everything in moderation. Having just completed a four-month, full-time Agile immersion with an amazing, albeit very idealistic, consultant, I found his ideas about a zero-defect mindset, and the claim that it was practically achievable, seductive. I had entertained my own idealistic fantasies for a while, never really thinking they could or should be taken so seriously.
It was liberating.
Also, it was isolating. Having these thoughts and that excitement placed me on one extreme of a continuum, with many of my teammates on the other side or somewhere in the middle, nearer the side of limiting TDD in the name of practicality. Conversation after conversation, debate after debate, we ended in the same place, perhaps even galvanized a bit by the disagreement and a bit further from finding common ground.
I finally came to understand that regardless of what I knew to be right, everyone on my team had their own perception and their own knowledge of what was right as well. That's not sarcasm. In social interactions there are multiple realities and all of them need to be appreciated and considered valid enough to be worth understanding.
How could I model my perception of reality in some concrete way that would enable me to make rigorous (albeit somewhat subjective) predictions? How could I ensure my mental model was at least self-consistent and workable? Like any self-respecting geek, I decided the best way to model uncertainty was to run thousands of simulations and projections of reality to see what lessons could be gleaned.
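The shape of that approach, as a minimal sketch in Python (the run_project stub and its numbers are mine, purely for illustration, not the model itself): simulate the same project many times and study the distribution of outcomes rather than any single prediction.

    import random
    import statistics

    def run_project(defect_rate, iterations=100):
        # Stand-in for one full simulated project (the real model is below);
        # returns the total value delivered across all iterations.
        value = 0.0
        for _ in range(iterations):
            value += random.expovariate(1.0) * (1.0 - defect_rate)
        return value

    # Thousands of projections of the same project, summarized as a spread.
    runs = [run_project(defect_rate=0.01) for _ in range(10_000)]
    print(f"median value throughput: {statistics.median(runs):.1f}")
    print(f"spread (stdev):          {statistics.stdev(runs):.1f}")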
Finding Common Ground in a Common Purpose
The first decision I had to make was the underlying metric I would use to compare the two development methodologies. Having just recently been introduced to systems thinking and the Theory of Constraints, I thought a great start would be the value throughput of the simulated companies.
But what is value? When we speak of delivering value to our business customers, what is it we are actually delivering? In discussions with my team, we decided that business value is best seen as the present-day value of your company were it to be valued by an external party. For the purposes of the simulation, I assume the value delivered by a completed story is a random number drawn from a value distribution, assigned without regard for feature size. That's right: a feature that takes next to nothing to develop may create an enormous amount of value for the company.
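In code, that assumption looks something like this (a sketch; the particular distributions are hypothetical choices of mine, since nothing here pins them down):

    import random

    STORY_SIZES = [1, 2, 3, 5, 8]  # hypothetical discrete size distribution

    def new_story():
        # Value is drawn independently of size: a trivially small story
        # can turn out to be a windfall for the company.
        return {"size": random.choice(STORY_SIZES),
                "value": random.lognormvariate(0, 1)}  # long-tailed value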
For further assumptions and specifics of my model, read on.
- User Story - In this simulation, a User Story is the smallest unit of work the Product Development Team can take on that provides even the slightest bit of business value. Each story also has an associated size.
- Business Customers - Generate a random set of new stories each iteration; each story's size and value are randomly assigned (from discrete distributions) upon creation.
- Product Backlog - Repository for all stories. New stories are added as top priority, in the order delivered. Bugs are randomly dispersed into the Product Backlog when they are received.
- Product Development Team - Anyone and everyone responsible for getting the release out the door: programmers, testers, technical writers, and so on. The Product Development Team iterates over the Product Backlog and works to complete stories; it also decides the cost of the various stories. Over time the speed of the team's work can increase, if a velocity range (minimum velocity to maximum velocity) greater than 1 is specified on construction. The function controlling the team's performance improvement is an "Experience Curve", as documented at http://en.wikipedia.org/wiki/Experience_curve_effects. Without getting too into it, the experience curve essentially models the decreasing cost of development as experience accumulates; see the sketch after this list.
- End Users - Who the Product Development Team releases to. Because the Product Development Team includes *everyone* needed to release the software, the End Users may receive the software immediately afterwards. End Users discover bugs in the software, currently at a constant rate per story per iteration. So if the defect rate is 1%, a team with a hundred completed stories can expect, give or take, one story per iteration to re-enter the Product Backlog as a new Bug Story. The size of the Bug Story is randomly determined from a discrete bug-size distribution (also in the sketch below).
- Bug Story - A story focused on fixing a defect in the software. Unlike normal stories, Bug Stories carry no real value for the team and thus don't improve throughput. A Bug Story actually represents an opportunity cost, since valuable work could have been done in its place had the Bug Story never needed to be written.
- Support Team - Who the bugs are reported to. Currently only used to track the total bug count; it could be used in the future to eliminate bugs due to "user error".
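Putting those pieces together, here is a condensed sketch of one simulated project in Python. Every number, distribution, and name here is an illustrative choice of mine rather than the original code; the experience curve follows the standard formulation from the Wikipedia article, C_n = C_1 * n^(log2 b), where b is the learning rate.

    import math
    import random

    LEARNING_RATE = 0.9    # b: each doubling of experience cuts unit cost to 90%
    DEFECT_RATE = 0.01     # chance, per completed story per iteration, of a new bug
    STORY_SIZES = [1, 2, 3, 5, 8]
    BUG_SIZES = [1, 2, 3]  # discrete bug-size distribution

    def unit_cost(completed):
        # Experience curve: C_n = C_1 * n ** log2(b), with C_1 normalized to 1.
        return max(completed, 1) ** math.log2(LEARNING_RATE)

    def simulate(iterations=200, capacity=10):
        backlog = []        # list of (size, value); Bug Stories carry zero value
        completed = 0
        throughput = 0.0
        for _ in range(iterations):
            # Business Customers: new stories land at the top of the backlog.
            new = [(random.choice(STORY_SIZES), random.lognormvariate(0, 1))
                   for _ in range(random.randint(1, 4))]
            backlog = new + backlog
            # Product Development Team: burn capacity, cheapened by experience.
            budget = capacity
            while backlog and budget > 0:
                size, value = backlog.pop(0)
                cost = size * unit_cost(completed)
                if cost > budget:
                    backlog.insert(0, (size, value))
                    break
                budget -= cost
                completed += 1
                throughput += value   # a Bug Story adds 0: pure opportunity cost
            # End Users: defects found at a constant rate per completed story.
            for _ in range(completed):
                if random.random() < DEFECT_RATE:
                    bug = (random.choice(BUG_SIZES), 0.0)
                    backlog.insert(random.randrange(len(backlog) + 1), bug)
        return throughput

    print(f"value throughput for one run: {simulate():.1f}")

Comparing the throughput distributions across many runs at different DEFECT_RATE settings is the kind of experiment behind the conclusions that follow.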
After running thousands of these simulated projects, a few lessons stood out:
- Lowering the defect rate, even at the cost of reduced performance, results in higher value throughput in the long term. In the short term, however, lowering the defect rate is hardly ever optimal.
- A higher defect rate results in a much higher spread of possible value throughput... in other words there's a higher variance in what you can expect in terms of value output from a product development team.
- Every development team has a point where the highest possible testing and quality rigor begins to outperform the less rigorous teams. The trick is identifying where this begins to happen for your particular company or project.
As for the questions that started this exploration:
- How do we find common ground? Share our assumptions and make them explicit. Codify them so that they can't be conveniently shifted when the arguments get uncomfortable.
- How much debt is too much? I didn't model technical debt in terms of needed refactoring, just in terms of defect likelihood. Too much debt is when you spend more time paying maintenance costs than delivering value.
- Is a Zero-Defect Mindset ever worthwhile? When? Yes it is: when you have set a goal of a sufficiently long lifetime for your product.
- How can I communicate abstract ideas without concrete evidence in a rigorous manner? Hopefully I just did.